Jan 23 01:09:45.868104 kernel: Linux version 6.12.66-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Thu Jan 22 22:22:03 -00 2026
Jan 23 01:09:45.868142 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=e8d7116310bea9a494780b8becdce41e7cc03ed509d8e2363e08981a47b3edc6
Jan 23 01:09:45.868161 kernel: BIOS-provided physical RAM map:
Jan 23 01:09:45.868172 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Jan 23 01:09:45.868183 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000786cdfff] usable
Jan 23 01:09:45.868193 kernel: BIOS-e820: [mem 0x00000000786ce000-0x000000007894dfff] reserved
Jan 23 01:09:45.868207 kernel: BIOS-e820: [mem 0x000000007894e000-0x000000007895dfff] ACPI data
Jan 23 01:09:45.868219 kernel: BIOS-e820: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS
Jan 23 01:09:45.868230 kernel: BIOS-e820: [mem 0x00000000789de000-0x000000007c97bfff] usable
Jan 23 01:09:45.868242 kernel: BIOS-e820: [mem 0x000000007c97c000-0x000000007c9fffff] reserved
Jan 23 01:09:45.868254 kernel: NX (Execute Disable) protection: active
Jan 23 01:09:45.868268 kernel: APIC: Static calls initialized
Jan 23 01:09:45.868280 kernel: e820: update [mem 0x768c0018-0x768c8e57] usable ==> usable
Jan 23 01:09:45.868292 kernel: extended physical RAM map:
Jan 23 01:09:45.868307 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Jan 23 01:09:45.868320 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000768c0017] usable
Jan 23 01:09:45.868336 kernel: reserve setup_data: [mem 0x00000000768c0018-0x00000000768c8e57] usable
Jan 23 01:09:45.868349 kernel: reserve setup_data: [mem 0x00000000768c8e58-0x00000000786cdfff] usable
Jan 23 01:09:45.868361 kernel: reserve setup_data: [mem 0x00000000786ce000-0x000000007894dfff] reserved
Jan 23 01:09:45.868374 kernel: reserve setup_data: [mem 0x000000007894e000-0x000000007895dfff] ACPI data
Jan 23 01:09:45.868387 kernel: reserve setup_data: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS
Jan 23 01:09:45.868400 kernel: reserve setup_data: [mem 0x00000000789de000-0x000000007c97bfff] usable
Jan 23 01:09:45.868413 kernel: reserve setup_data: [mem 0x000000007c97c000-0x000000007c9fffff] reserved
Jan 23 01:09:45.868426 kernel: efi: EFI v2.7 by EDK II
Jan 23 01:09:45.868439 kernel: efi: SMBIOS=0x7886a000 ACPI=0x7895d000 ACPI 2.0=0x7895d014 MEMATTR=0x77015518
Jan 23 01:09:45.868452 kernel: secureboot: Secure boot disabled
Jan 23 01:09:45.868465 kernel: SMBIOS 2.7 present.
Jan 23 01:09:45.868480 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017
Jan 23 01:09:45.868493 kernel: DMI: Memory slots populated: 1/1
Jan 23 01:09:45.868506 kernel: Hypervisor detected: KVM
Jan 23 01:09:45.868519 kernel: last_pfn = 0x7c97c max_arch_pfn = 0x400000000
Jan 23 01:09:45.868532 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 23 01:09:45.868544 kernel: kvm-clock: using sched offset of 5148871169 cycles
Jan 23 01:09:45.868559 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 23 01:09:45.868571 kernel: tsc: Detected 2499.996 MHz processor
Jan 23 01:09:45.868583 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 23 01:09:45.868595 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 23 01:09:45.868610 kernel: last_pfn = 0x7c97c max_arch_pfn = 0x400000000
Jan 23 01:09:45.868621 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Jan 23 01:09:45.868635 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 23 01:09:45.868654 kernel: Using GB pages for direct mapping
Jan 23 01:09:45.868667 kernel: ACPI: Early table checksum verification disabled
Jan 23 01:09:45.868680 kernel: ACPI: RSDP 0x000000007895D014 000024 (v02 AMAZON)
Jan 23 01:09:45.868695 kernel: ACPI: XSDT 0x000000007895C0E8 00006C (v01 AMAZON AMZNFACP 00000001 01000013)
Jan 23 01:09:45.868712 kernel: ACPI: FACP 0x0000000078955000 000114 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001)
Jan 23 01:09:45.868725 kernel: ACPI: DSDT 0x0000000078956000 00115A (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Jan 23 01:09:45.868739 kernel: ACPI: FACS 0x00000000789D0000 000040
Jan 23 01:09:45.868768 kernel: ACPI: WAET 0x000000007895B000 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001)
Jan 23 01:09:45.868783 kernel: ACPI: SLIT 0x000000007895A000 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Jan 23 01:09:45.868798 kernel: ACPI: APIC 0x0000000078959000 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Jan 23 01:09:45.868814 kernel: ACPI: SRAT 0x0000000078958000 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001)
Jan 23 01:09:45.868828 kernel: ACPI: HPET 0x0000000078954000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001)
Jan 23 01:09:45.868847 kernel: ACPI: SSDT 0x0000000078953000 000759 (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Jan 23 01:09:45.868863 kernel: ACPI: SSDT 0x0000000078952000 00007F (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Jan 23 01:09:45.868878 kernel: ACPI: BGRT 0x0000000078951000 000038 (v01 AMAZON AMAZON 00000002 01000013)
Jan 23 01:09:45.868893 kernel: ACPI: Reserving FACP table memory at [mem 0x78955000-0x78955113]
Jan 23 01:09:45.868908 kernel: ACPI: Reserving DSDT table memory at [mem 0x78956000-0x78957159]
Jan 23 01:09:45.868921 kernel: ACPI: Reserving FACS table memory at [mem 0x789d0000-0x789d003f]
Jan 23 01:09:45.868936 kernel: ACPI: Reserving WAET table memory at [mem 0x7895b000-0x7895b027]
Jan 23 01:09:45.868951 kernel: ACPI: Reserving SLIT table memory at [mem 0x7895a000-0x7895a06b]
Jan 23 01:09:45.868970 kernel: ACPI: Reserving APIC table memory at [mem 0x78959000-0x78959075]
Jan 23 01:09:45.868985 kernel: ACPI: Reserving SRAT table memory at [mem 0x78958000-0x7895809f]
Jan 23 01:09:45.869000 kernel: ACPI: Reserving HPET table memory at [mem 0x78954000-0x78954037]
Jan 23 01:09:45.869015 kernel: ACPI: Reserving SSDT table memory at [mem 0x78953000-0x78953758]
Jan 23 01:09:45.869029 kernel: ACPI: Reserving SSDT table memory at [mem 0x78952000-0x7895207e]
Jan 23 01:09:45.869045 kernel: ACPI: Reserving BGRT table memory at [mem 0x78951000-0x78951037]
Jan 23 01:09:45.869060 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff]
Jan 23 01:09:45.869075 kernel: NUMA: Initialized distance table, cnt=1
Jan 23 01:09:45.869089 kernel: NODE_DATA(0) allocated [mem 0x7a8eedc0-0x7a8f5fff]
Jan 23 01:09:45.869106 kernel: Zone ranges:
Jan 23 01:09:45.869119 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 23 01:09:45.869131 kernel: DMA32 [mem 0x0000000001000000-0x000000007c97bfff]
Jan 23 01:09:45.869145 kernel: Normal empty
Jan 23 01:09:45.869159 kernel: Device empty
Jan 23 01:09:45.869172 kernel: Movable zone start for each node
Jan 23 01:09:45.869184 kernel: Early memory node ranges
Jan 23 01:09:45.869194 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Jan 23 01:09:45.869215 kernel: node 0: [mem 0x0000000000100000-0x00000000786cdfff]
Jan 23 01:09:45.869227 kernel: node 0: [mem 0x00000000789de000-0x000000007c97bfff]
Jan 23 01:09:45.869243 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007c97bfff]
Jan 23 01:09:45.869255 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 23 01:09:45.869268 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Jan 23 01:09:45.869282 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
Jan 23 01:09:45.869295 kernel: On node 0, zone DMA32: 13956 pages in unavailable ranges
Jan 23 01:09:45.869308 kernel: ACPI: PM-Timer IO Port: 0xb008
Jan 23 01:09:45.869322 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 23 01:09:45.869334 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23
Jan 23 01:09:45.869350 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 23 01:09:45.869365 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 23 01:09:45.869379 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 23 01:09:45.869391 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 23 01:09:45.869405 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 23 01:09:45.869417 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 23 01:09:45.869431 kernel: TSC deadline timer available
Jan 23 01:09:45.869444 kernel: CPU topo: Max. logical packages: 1
Jan 23 01:09:45.869457 kernel: CPU topo: Max. logical dies: 1
Jan 23 01:09:45.869472 kernel: CPU topo: Max. dies per package: 1
Jan 23 01:09:45.869489 kernel: CPU topo: Max. threads per core: 2
Jan 23 01:09:45.869501 kernel: CPU topo: Num. cores per package: 1
Jan 23 01:09:45.869514 kernel: CPU topo: Num. threads per package: 2
Jan 23 01:09:45.869527 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs
Jan 23 01:09:45.869540 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 23 01:09:45.869553 kernel: [mem 0x7ca00000-0xffffffff] available for PCI devices
Jan 23 01:09:45.869566 kernel: Booting paravirtualized kernel on KVM
Jan 23 01:09:45.869580 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 23 01:09:45.869594 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jan 23 01:09:45.869607 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576
Jan 23 01:09:45.869625 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152
Jan 23 01:09:45.869640 kernel: pcpu-alloc: [0] 0 1
Jan 23 01:09:45.869655 kernel: kvm-guest: PV spinlocks enabled
Jan 23 01:09:45.869670 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 23 01:09:45.869686 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=e8d7116310bea9a494780b8becdce41e7cc03ed509d8e2363e08981a47b3edc6
Jan 23 01:09:45.869700 kernel: random: crng init done
Jan 23 01:09:45.869713 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 23 01:09:45.869729 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jan 23 01:09:45.869742 kernel: Fallback order for Node 0: 0
Jan 23 01:09:45.870821 kernel: Built 1 zonelists, mobility grouping on. Total pages: 509451
Jan 23 01:09:45.870839 kernel: Policy zone: DMA32
Jan 23 01:09:45.870869 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 23 01:09:45.870888 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 23 01:09:45.870905 kernel: Kernel/User page tables isolation: enabled
Jan 23 01:09:45.870921 kernel: ftrace: allocating 40097 entries in 157 pages
Jan 23 01:09:45.870936 kernel: ftrace: allocated 157 pages with 5 groups
Jan 23 01:09:45.870951 kernel: Dynamic Preempt: voluntary
Jan 23 01:09:45.870967 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 23 01:09:45.870983 kernel: rcu: RCU event tracing is enabled.
Jan 23 01:09:45.871001 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 23 01:09:45.871017 kernel: Trampoline variant of Tasks RCU enabled.
Jan 23 01:09:45.871032 kernel: Rude variant of Tasks RCU enabled.
Jan 23 01:09:45.871047 kernel: Tracing variant of Tasks RCU enabled.
Jan 23 01:09:45.871063 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 23 01:09:45.871079 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 23 01:09:45.871100 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 23 01:09:45.871118 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 23 01:09:45.871135 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 23 01:09:45.871152 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Jan 23 01:09:45.871169 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 23 01:09:45.871186 kernel: Console: colour dummy device 80x25
Jan 23 01:09:45.871203 kernel: printk: legacy console [tty0] enabled
Jan 23 01:09:45.871220 kernel: printk: legacy console [ttyS0] enabled
Jan 23 01:09:45.871241 kernel: ACPI: Core revision 20240827
Jan 23 01:09:45.871258 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns
Jan 23 01:09:45.871275 kernel: APIC: Switch to symmetric I/O mode setup
Jan 23 01:09:45.871291 kernel: x2apic enabled
Jan 23 01:09:45.871308 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 23 01:09:45.871325 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns
Jan 23 01:09:45.871342 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499996)
Jan 23 01:09:45.871358 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Jan 23 01:09:45.871375 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
Jan 23 01:09:45.871394 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 23 01:09:45.871410 kernel: Spectre V2 : Mitigation: Retpolines
Jan 23 01:09:45.871426 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jan 23 01:09:45.871444 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Jan 23 01:09:45.871458 kernel: RETBleed: Vulnerable
Jan 23 01:09:45.871471 kernel: Speculative Store Bypass: Vulnerable
Jan 23 01:09:45.871484 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 23 01:09:45.871500 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 23 01:09:45.871517 kernel: GDS: Unknown: Dependent on hypervisor status
Jan 23 01:09:45.871533 kernel: active return thunk: its_return_thunk
Jan 23 01:09:45.871549 kernel: ITS: Mitigation: Aligned branch/return thunks
Jan 23 01:09:45.871569 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 23 01:09:45.871586 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 23 01:09:45.871602 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 23 01:09:45.871619 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Jan 23 01:09:45.871635 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Jan 23 01:09:45.871651 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Jan 23 01:09:45.871667 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Jan 23 01:09:45.871684 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Jan 23 01:09:45.871700 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Jan 23 01:09:45.871717 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 23 01:09:45.871734 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Jan 23 01:09:45.871768 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Jan 23 01:09:45.871786 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64
Jan 23 01:09:45.871802 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512
Jan 23 01:09:45.871815 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024
Jan 23 01:09:45.871829 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8
Jan 23 01:09:45.871842 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format.
Jan 23 01:09:45.871857 kernel: Freeing SMP alternatives memory: 32K
Jan 23 01:09:45.871870 kernel: pid_max: default: 32768 minimum: 301
Jan 23 01:09:45.871884 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jan 23 01:09:45.871898 kernel: landlock: Up and running.
Jan 23 01:09:45.871913 kernel: SELinux: Initializing.
Jan 23 01:09:45.871929 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 23 01:09:45.871946 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 23 01:09:45.871961 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7)
Jan 23 01:09:45.871976 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Jan 23 01:09:45.871992 kernel: signal: max sigframe size: 3632
Jan 23 01:09:45.872007 kernel: rcu: Hierarchical SRCU implementation.
Jan 23 01:09:45.872024 kernel: rcu: Max phase no-delay instances is 400.
Jan 23 01:09:45.872038 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jan 23 01:09:45.872054 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 23 01:09:45.872069 kernel: smp: Bringing up secondary CPUs ...
Jan 23 01:09:45.872087 kernel: smpboot: x86: Booting SMP configuration:
Jan 23 01:09:45.872102 kernel: .... node #0, CPUs: #1
Jan 23 01:09:45.872118 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Jan 23 01:09:45.872135 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Jan 23 01:09:45.872150 kernel: smp: Brought up 1 node, 2 CPUs
Jan 23 01:09:45.872166 kernel: smpboot: Total of 2 processors activated (9999.98 BogoMIPS)
Jan 23 01:09:45.872183 kernel: Memory: 1899856K/2037804K available (14336K kernel code, 2445K rwdata, 26064K rodata, 46196K init, 2564K bss, 133384K reserved, 0K cma-reserved)
Jan 23 01:09:45.872199 kernel: devtmpfs: initialized
Jan 23 01:09:45.872215 kernel: x86/mm: Memory block size: 128MB
Jan 23 01:09:45.872235 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7895e000-0x789ddfff] (524288 bytes)
Jan 23 01:09:45.872252 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 23 01:09:45.872269 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 23 01:09:45.872285 kernel: pinctrl core: initialized pinctrl subsystem
Jan 23 01:09:45.872301 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 23 01:09:45.872318 kernel: audit: initializing netlink subsys (disabled)
Jan 23 01:09:45.872335 kernel: audit: type=2000 audit(1769130584.096:1): state=initialized audit_enabled=0 res=1
Jan 23 01:09:45.872351 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 23 01:09:45.872371 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 23 01:09:45.872387 kernel: cpuidle: using governor menu
Jan 23 01:09:45.872403 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 23 01:09:45.872418 kernel: dca service started, version 1.12.1
Jan 23 01:09:45.872433 kernel: PCI: Using configuration type 1 for base access
Jan 23 01:09:45.872449 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 23 01:09:45.872465 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 23 01:09:45.872480 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 23 01:09:45.872495 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 23 01:09:45.872514 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 23 01:09:45.872529 kernel: ACPI: Added _OSI(Module Device)
Jan 23 01:09:45.872545 kernel: ACPI: Added _OSI(Processor Device)
Jan 23 01:09:45.872560 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 23 01:09:45.872576 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Jan 23 01:09:45.872591 kernel: ACPI: Interpreter enabled
Jan 23 01:09:45.872606 kernel: ACPI: PM: (supports S0 S5)
Jan 23 01:09:45.872622 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 23 01:09:45.872638 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 23 01:09:45.872655 kernel: PCI: Using E820 reservations for host bridge windows
Jan 23 01:09:45.872674 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Jan 23 01:09:45.872689 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 23 01:09:45.878931 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Jan 23 01:09:45.879118 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Jan 23 01:09:45.879269 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Jan 23 01:09:45.879290 kernel: acpiphp: Slot [3] registered
Jan 23 01:09:45.879305 kernel: acpiphp: Slot [4] registered
Jan 23 01:09:45.879327 kernel: acpiphp: Slot [5] registered
Jan 23 01:09:45.879341 kernel: acpiphp: Slot [6] registered
Jan 23 01:09:45.879356 kernel: acpiphp: Slot [7] registered
Jan 23 01:09:45.879371 kernel: acpiphp: Slot [8] registered
Jan 23 01:09:45.879386 kernel: acpiphp: Slot [9] registered
Jan 23 01:09:45.879401 kernel: acpiphp: Slot [10] registered
Jan 23 01:09:45.879418 kernel: acpiphp: Slot [11] registered
Jan 23 01:09:45.879433 kernel: acpiphp: Slot [12] registered
Jan 23 01:09:45.879449 kernel: acpiphp: Slot [13] registered
Jan 23 01:09:45.879468 kernel: acpiphp: Slot [14] registered
Jan 23 01:09:45.879483 kernel: acpiphp: Slot [15] registered
Jan 23 01:09:45.879499 kernel: acpiphp: Slot [16] registered
Jan 23 01:09:45.879514 kernel: acpiphp: Slot [17] registered
Jan 23 01:09:45.879530 kernel: acpiphp: Slot [18] registered
Jan 23 01:09:45.879545 kernel: acpiphp: Slot [19] registered
Jan 23 01:09:45.879561 kernel: acpiphp: Slot [20] registered
Jan 23 01:09:45.879576 kernel: acpiphp: Slot [21] registered
Jan 23 01:09:45.879591 kernel: acpiphp: Slot [22] registered
Jan 23 01:09:45.879607 kernel: acpiphp: Slot [23] registered
Jan 23 01:09:45.879626 kernel: acpiphp: Slot [24] registered
Jan 23 01:09:45.879641 kernel: acpiphp: Slot [25] registered
Jan 23 01:09:45.879656 kernel: acpiphp: Slot [26] registered
Jan 23 01:09:45.879671 kernel: acpiphp: Slot [27] registered
Jan 23 01:09:45.879686 kernel: acpiphp: Slot [28] registered
Jan 23 01:09:45.879700 kernel: acpiphp: Slot [29] registered
Jan 23 01:09:45.879716 kernel: acpiphp: Slot [30] registered
Jan 23 01:09:45.879730 kernel: acpiphp: Slot [31] registered
Jan 23 01:09:45.879763 kernel: PCI host bridge to bus 0000:00
Jan 23 01:09:45.879945 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 23 01:09:45.880079 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 23 01:09:45.880206 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 23 01:09:45.880330 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Jan 23 01:09:45.880452 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x2000ffffffff window]
Jan 23 01:09:45.880574 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 23 01:09:45.880740 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Jan 23 01:09:45.881268 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Jan 23 01:09:45.881423 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000 conventional PCI endpoint
Jan 23 01:09:45.881575 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Jan 23 01:09:45.881738 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff
Jan 23 01:09:45.881915 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff
Jan 23 01:09:45.882073 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff
Jan 23 01:09:45.882234 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff
Jan 23 01:09:45.882389 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff
Jan 23 01:09:45.882548 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff
Jan 23 01:09:45.882699 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000 conventional PCI endpoint
Jan 23 01:09:45.883890 kernel: pci 0000:00:03.0: BAR 0 [mem 0x80000000-0x803fffff pref]
Jan 23 01:09:45.884039 kernel: pci 0000:00:03.0: ROM [mem 0xffff0000-0xffffffff pref]
Jan 23 01:09:45.884170 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 23 01:09:45.884332 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 PCIe Endpoint
Jan 23 01:09:45.884468 kernel: pci 0000:00:04.0: BAR 0 [mem 0x80404000-0x80407fff]
Jan 23 01:09:45.884620 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 PCIe Endpoint
Jan 23 01:09:45.887209 kernel: pci 0000:00:05.0: BAR 0 [mem 0x80400000-0x80403fff]
Jan 23 01:09:45.887244 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 23 01:09:45.887260 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 23 01:09:45.887274 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 23 01:09:45.887293 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 23 01:09:45.887306 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Jan 23 01:09:45.887320 kernel: iommu: Default domain type: Translated
Jan 23 01:09:45.887338 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 23 01:09:45.887354 kernel: efivars: Registered efivars operations
Jan 23 01:09:45.887368 kernel: PCI: Using ACPI for IRQ routing
Jan 23 01:09:45.887382 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 23 01:09:45.887397 kernel: e820: reserve RAM buffer [mem 0x768c0018-0x77ffffff]
Jan 23 01:09:45.887413 kernel: e820: reserve RAM buffer [mem 0x786ce000-0x7bffffff]
Jan 23 01:09:45.887432 kernel: e820: reserve RAM buffer [mem 0x7c97c000-0x7fffffff]
Jan 23 01:09:45.887596 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device
Jan 23 01:09:45.887740 kernel: pci 0000:00:03.0: vgaarb: bridge control possible
Jan 23 01:09:45.887901 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 23 01:09:45.887920 kernel: vgaarb: loaded
Jan 23 01:09:45.887936 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0
Jan 23 01:09:45.887951 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter
Jan 23 01:09:45.887966 kernel: clocksource: Switched to clocksource kvm-clock
Jan 23 01:09:45.887982 kernel: VFS: Disk quotas dquot_6.6.0
Jan 23 01:09:45.888002 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 23 01:09:45.888017 kernel: pnp: PnP ACPI init
Jan 23 01:09:45.888033 kernel: pnp: PnP ACPI: found 5 devices
Jan 23 01:09:45.888049 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 23 01:09:45.888064 kernel: NET: Registered PF_INET protocol family
Jan 23 01:09:45.888079 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 23 01:09:45.888095 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Jan 23 01:09:45.888111 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 23 01:09:45.888126 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 23 01:09:45.888145 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Jan 23 01:09:45.888160 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Jan 23 01:09:45.888175 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 23 01:09:45.888190 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 23 01:09:45.888205 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 23 01:09:45.888218 kernel: NET: Registered PF_XDP protocol family
Jan 23 01:09:45.888356 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 23 01:09:45.888470 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 23 01:09:45.888591 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 23 01:09:45.888703 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Jan 23 01:09:45.890893 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x2000ffffffff window]
Jan 23 01:09:45.891051 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Jan 23 01:09:45.891072 kernel: PCI: CLS 0 bytes, default 64
Jan 23 01:09:45.891088 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Jan 23 01:09:45.891105 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns
Jan 23 01:09:45.891120 kernel: clocksource: Switched to clocksource tsc
Jan 23 01:09:45.891135 kernel: Initialise system trusted keyrings
Jan 23 01:09:45.891156 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Jan 23 01:09:45.891171 kernel: Key type asymmetric registered
Jan 23 01:09:45.891186 kernel: Asymmetric key parser 'x509' registered
Jan 23 01:09:45.891201 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jan 23 01:09:45.891218 kernel: io scheduler mq-deadline registered
Jan 23 01:09:45.891233 kernel: io scheduler kyber registered
Jan 23 01:09:45.891249 kernel: io scheduler bfq registered
Jan 23 01:09:45.891264 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 23 01:09:45.891280 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 23 01:09:45.891298 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 23 01:09:45.891314 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 23 01:09:45.891330 kernel: i8042: Warning: Keylock active
Jan 23 01:09:45.891345 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 23 01:09:45.891361 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 23 01:09:45.891512 kernel: rtc_cmos 00:00: RTC can wake from S4
Jan 23 01:09:45.891645 kernel: rtc_cmos 00:00: registered as rtc0
Jan 23 01:09:45.891797 kernel: rtc_cmos 00:00: setting system clock to 2026-01-23T01:09:45 UTC (1769130585)
Jan 23 01:09:45.891919 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Jan 23 01:09:45.891957 kernel: intel_pstate: CPU model not supported
Jan 23 01:09:45.891974 kernel: efifb: probing for efifb
Jan 23 01:09:45.891990 kernel: efifb: framebuffer at 0x80000000, using 1876k, total 1875k
Jan 23 01:09:45.892006 kernel: efifb: mode is 800x600x32, linelength=3200, pages=1
Jan 23 01:09:45.892022 kernel: efifb: scrolling: redraw
Jan 23 01:09:45.892037 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Jan 23 01:09:45.892054 kernel: Console: switching to colour frame buffer device 100x37
Jan 23 01:09:45.892073 kernel: fb0: EFI VGA frame buffer device
Jan 23 01:09:45.892089 kernel: pstore: Using crash dump compression: deflate
Jan 23 01:09:45.892105 kernel: pstore: Registered efi_pstore as persistent store backend
Jan 23 01:09:45.892121 kernel: NET: Registered PF_INET6 protocol family
Jan 23 01:09:45.892136 kernel: Segment Routing with IPv6
Jan 23 01:09:45.892152 kernel: In-situ OAM (IOAM) with IPv6
Jan 23 01:09:45.892167 kernel: NET: Registered PF_PACKET protocol family
Jan 23 01:09:45.892183 kernel: Key type dns_resolver registered
Jan 23 01:09:45.892199 kernel: IPI shorthand broadcast: enabled
Jan 23 01:09:45.892215 kernel: sched_clock: Marking stable (2611003741, 145105693)->(2824840808, -68731374)
Jan 23 01:09:45.892233 kernel: registered taskstats version 1
Jan 23 01:09:45.892250 kernel: Loading compiled-in X.509 certificates
Jan 23 01:09:45.892266 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.66-flatcar: ed54f39d0282729985c39b8ffa9938cacff38d8a'
Jan 23 01:09:45.892282 kernel: Demotion targets for Node 0: null
Jan 23 01:09:45.892298 kernel: Key type .fscrypt registered
Jan 23 01:09:45.892314 kernel: Key type fscrypt-provisioning registered
Jan 23 01:09:45.892329 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 23 01:09:45.892345 kernel: ima: Allocated hash algorithm: sha1
Jan 23 01:09:45.892361 kernel: ima: No architecture policies found
Jan 23 01:09:45.892379 kernel: clk: Disabling unused clocks
Jan 23 01:09:45.892395 kernel: Warning: unable to open an initial console.
Jan 23 01:09:45.892411 kernel: Freeing unused kernel image (initmem) memory: 46196K
Jan 23 01:09:45.892427 kernel: Write protecting the kernel read-only data: 40960k
Jan 23 01:09:45.892446 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K
Jan 23 01:09:45.892465 kernel: Run /init as init process
Jan 23 01:09:45.892481 kernel: with arguments:
Jan 23 01:09:45.892497 kernel: /init
Jan 23 01:09:45.892512 kernel: with environment:
Jan 23 01:09:45.892528 kernel: HOME=/
Jan 23 01:09:45.892544 kernel: TERM=linux
Jan 23 01:09:45.892562 systemd[1]: Successfully made /usr/ read-only.
Jan 23 01:09:45.892581 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 23 01:09:45.892598 systemd[1]: Detected virtualization amazon. Jan 23 01:09:45.892611 systemd[1]: Detected architecture x86-64. Jan 23 01:09:45.892625 systemd[1]: Running in initrd. Jan 23 01:09:45.892639 systemd[1]: No hostname configured, using default hostname. Jan 23 01:09:45.892653 systemd[1]: Hostname set to . Jan 23 01:09:45.892668 systemd[1]: Initializing machine ID from VM UUID. Jan 23 01:09:45.892682 systemd[1]: Queued start job for default target initrd.target. Jan 23 01:09:45.892696 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 23 01:09:45.892714 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 23 01:09:45.892732 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 23 01:09:45.893939 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 23 01:09:45.893967 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 23 01:09:45.893987 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 23 01:09:45.894005 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 23 01:09:45.894031 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 23 01:09:45.894049 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). 
Jan 23 01:09:45.894066 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 23 01:09:45.894085 systemd[1]: Reached target paths.target - Path Units. Jan 23 01:09:45.894104 systemd[1]: Reached target slices.target - Slice Units. Jan 23 01:09:45.894123 systemd[1]: Reached target swap.target - Swaps. Jan 23 01:09:45.894141 systemd[1]: Reached target timers.target - Timer Units. Jan 23 01:09:45.894161 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 23 01:09:45.894182 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 23 01:09:45.894204 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 23 01:09:45.894222 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jan 23 01:09:45.894240 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 23 01:09:45.894258 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 23 01:09:45.894277 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 23 01:09:45.894295 systemd[1]: Reached target sockets.target - Socket Units. Jan 23 01:09:45.894315 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 23 01:09:45.894333 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 23 01:09:45.894353 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 23 01:09:45.894376 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jan 23 01:09:45.894392 systemd[1]: Starting systemd-fsck-usr.service... Jan 23 01:09:45.894408 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 23 01:09:45.894426 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... 
Jan 23 01:09:45.894444 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 01:09:45.894461 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 23 01:09:45.894486 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 23 01:09:45.894505 systemd[1]: Finished systemd-fsck-usr.service. Jan 23 01:09:45.894555 systemd-journald[187]: Collecting audit messages is disabled. Jan 23 01:09:45.894605 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 23 01:09:45.894625 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 01:09:45.894645 systemd-journald[187]: Journal started Jan 23 01:09:45.894683 systemd-journald[187]: Runtime Journal (/run/log/journal/ec2d098938540c8189b070e9f9a9f510) is 4.7M, max 38.1M, 33.3M free. Jan 23 01:09:45.895334 systemd-modules-load[189]: Inserted module 'overlay' Jan 23 01:09:45.899795 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 23 01:09:45.908771 systemd[1]: Started systemd-journald.service - Journal Service. Jan 23 01:09:45.912983 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 23 01:09:45.925285 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 23 01:09:45.932929 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 23 01:09:45.941364 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 23 01:09:45.947846 systemd-modules-load[189]: Inserted module 'br_netfilter' Jan 23 01:09:45.948779 kernel: Bridge firewalling registered Jan 23 01:09:45.949952 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Jan 23 01:09:45.950962 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 23 01:09:45.959572 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 23 01:09:45.960393 systemd-tmpfiles[210]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Jan 23 01:09:45.967958 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 23 01:09:45.970854 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 23 01:09:45.978940 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 23 01:09:45.991074 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 23 01:09:45.995317 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 23 01:09:45.998452 dracut-cmdline[220]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=e8d7116310bea9a494780b8becdce41e7cc03ed509d8e2363e08981a47b3edc6 Jan 23 01:09:46.008473 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jan 23 01:09:46.058087 systemd-resolved[236]: Positive Trust Anchors: Jan 23 01:09:46.059140 systemd-resolved[236]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 23 01:09:46.059209 systemd-resolved[236]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 23 01:09:46.067090 systemd-resolved[236]: Defaulting to hostname 'linux'. Jan 23 01:09:46.069832 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 23 01:09:46.070530 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 23 01:09:46.105784 kernel: SCSI subsystem initialized Jan 23 01:09:46.115811 kernel: Loading iSCSI transport class v2.0-870. Jan 23 01:09:46.126781 kernel: iscsi: registered transport (tcp) Jan 23 01:09:46.148830 kernel: iscsi: registered transport (qla4xxx) Jan 23 01:09:46.148915 kernel: QLogic iSCSI HBA Driver Jan 23 01:09:46.167944 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 23 01:09:46.184999 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 23 01:09:46.189680 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 23 01:09:46.236310 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 23 01:09:46.238471 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... 
Jan 23 01:09:46.290787 kernel: raid6: avx512x4 gen() 17796 MB/s Jan 23 01:09:46.308775 kernel: raid6: avx512x2 gen() 17730 MB/s Jan 23 01:09:46.326778 kernel: raid6: avx512x1 gen() 17657 MB/s Jan 23 01:09:46.344774 kernel: raid6: avx2x4 gen() 17578 MB/s Jan 23 01:09:46.362778 kernel: raid6: avx2x2 gen() 17097 MB/s Jan 23 01:09:46.381412 kernel: raid6: avx2x1 gen() 13478 MB/s Jan 23 01:09:46.381487 kernel: raid6: using algorithm avx512x4 gen() 17796 MB/s Jan 23 01:09:46.400916 kernel: raid6: .... xor() 7636 MB/s, rmw enabled Jan 23 01:09:46.401008 kernel: raid6: using avx512x2 recovery algorithm Jan 23 01:09:46.421788 kernel: xor: automatically using best checksumming function avx Jan 23 01:09:46.590784 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 23 01:09:46.597895 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 23 01:09:46.600115 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 23 01:09:46.631764 systemd-udevd[436]: Using default interface naming scheme 'v255'. Jan 23 01:09:46.638436 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 23 01:09:46.642498 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 23 01:09:46.676551 dracut-pre-trigger[444]: rd.md=0: removing MD RAID activation Jan 23 01:09:46.679081 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input3 Jan 23 01:09:46.708058 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 23 01:09:46.710142 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 23 01:09:46.808830 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 23 01:09:46.813240 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... 
Jan 23 01:09:46.904768 kernel: cryptd: max_cpu_qlen set to 1000 Jan 23 01:09:46.917190 kernel: ena 0000:00:05.0: ENA device version: 0.10 Jan 23 01:09:46.917492 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Jan 23 01:09:46.931176 kernel: nvme nvme0: pci function 0000:00:04.0 Jan 23 01:09:46.931439 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Jan 23 01:09:46.938781 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy. Jan 23 01:09:46.944824 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 23 01:09:46.949988 kernel: nvme nvme0: 2/0/0 default/read/poll queues Jan 23 01:09:46.950147 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80400000, mac addr 06:02:a6:f6:df:57 Jan 23 01:09:46.945430 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 01:09:46.951252 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 01:09:46.952725 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 01:09:46.973623 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 23 01:09:46.973658 kernel: GPT:9289727 != 33554431 Jan 23 01:09:46.973677 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 23 01:09:46.973697 kernel: GPT:9289727 != 33554431 Jan 23 01:09:46.973721 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 23 01:09:46.973741 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jan 23 01:09:46.961337 (udev-worker)[496]: Network interface NamePolicy= disabled on kernel command line. Jan 23 01:09:46.970321 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jan 23 01:09:46.980794 kernel: AES CTR mode by8 optimization enabled Jan 23 01:09:47.013780 kernel: nvme nvme0: using unchecked data buffer Jan 23 01:09:47.020022 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Jan 23 01:09:47.149174 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM. Jan 23 01:09:47.163285 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Jan 23 01:09:47.164203 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 23 01:09:47.184514 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT. Jan 23 01:09:47.194352 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A. Jan 23 01:09:47.195081 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A. Jan 23 01:09:47.196564 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 23 01:09:47.197701 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 23 01:09:47.198795 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 23 01:09:47.200493 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 23 01:09:47.203492 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 23 01:09:47.228636 disk-uuid[668]: Primary Header is updated. Jan 23 01:09:47.228636 disk-uuid[668]: Secondary Entries is updated. Jan 23 01:09:47.228636 disk-uuid[668]: Secondary Header is updated. Jan 23 01:09:47.233489 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jan 23 01:09:47.229922 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 23 01:09:48.252535 disk-uuid[674]: The operation has completed successfully. Jan 23 01:09:48.253154 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jan 23 01:09:48.402609 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 23 01:09:48.402739 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. 
Jan 23 01:09:48.442612 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 23 01:09:48.457521 sh[936]: Success Jan 23 01:09:48.478083 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 23 01:09:48.478166 kernel: device-mapper: uevent: version 1.0.3 Jan 23 01:09:48.478188 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jan 23 01:09:48.490826 kernel: device-mapper: verity: sha256 using shash "sha256-avx2" Jan 23 01:09:48.593445 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 23 01:09:48.600860 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 23 01:09:48.611873 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 23 01:09:48.637783 kernel: BTRFS: device fsid f8eb2396-46b8-49a3-a8e7-cd8ad10a3ce4 devid 1 transid 37 /dev/mapper/usr (254:0) scanned by mount (959) Jan 23 01:09:48.642398 kernel: BTRFS info (device dm-0): first mount of filesystem f8eb2396-46b8-49a3-a8e7-cd8ad10a3ce4 Jan 23 01:09:48.642460 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 23 01:09:48.753443 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jan 23 01:09:48.753514 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 23 01:09:48.753527 kernel: BTRFS info (device dm-0): enabling free space tree Jan 23 01:09:48.757818 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 23 01:09:48.758933 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jan 23 01:09:48.759641 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 23 01:09:48.760663 systemd[1]: Starting ignition-setup.service - Ignition (setup)... 
Jan 23 01:09:48.763139 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 23 01:09:48.798808 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (992) Jan 23 01:09:48.802983 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem a3ccc207-e674-4ba2-b6d8-404b4581ba01 Jan 23 01:09:48.803049 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Jan 23 01:09:48.822237 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jan 23 01:09:48.822310 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Jan 23 01:09:48.831830 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem a3ccc207-e674-4ba2-b6d8-404b4581ba01 Jan 23 01:09:48.833943 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 23 01:09:48.836871 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 23 01:09:48.874147 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 23 01:09:48.877023 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 23 01:09:48.936978 systemd-networkd[1128]: lo: Link UP Jan 23 01:09:48.936989 systemd-networkd[1128]: lo: Gained carrier Jan 23 01:09:48.938877 systemd-networkd[1128]: Enumeration completed Jan 23 01:09:48.939300 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 23 01:09:48.939309 systemd-networkd[1128]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 23 01:09:48.939315 systemd-networkd[1128]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 23 01:09:48.940265 systemd[1]: Reached target network.target - Network. 
Jan 23 01:09:48.944218 systemd-networkd[1128]: eth0: Link UP Jan 23 01:09:48.944223 systemd-networkd[1128]: eth0: Gained carrier Jan 23 01:09:48.944241 systemd-networkd[1128]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 23 01:09:48.962851 systemd-networkd[1128]: eth0: DHCPv4 address 172.31.21.166/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jan 23 01:09:49.160851 ignition[1081]: Ignition 2.22.0 Jan 23 01:09:49.160864 ignition[1081]: Stage: fetch-offline Jan 23 01:09:49.161046 ignition[1081]: no configs at "/usr/lib/ignition/base.d" Jan 23 01:09:49.161056 ignition[1081]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 23 01:09:49.163732 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 23 01:09:49.161667 ignition[1081]: Ignition finished successfully Jan 23 01:09:49.165064 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jan 23 01:09:49.212208 ignition[1138]: Ignition 2.22.0 Jan 23 01:09:49.212221 ignition[1138]: Stage: fetch Jan 23 01:09:49.212497 ignition[1138]: no configs at "/usr/lib/ignition/base.d" Jan 23 01:09:49.212506 ignition[1138]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 23 01:09:49.212583 ignition[1138]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 23 01:09:49.221899 ignition[1138]: PUT result: OK Jan 23 01:09:49.224399 ignition[1138]: parsed url from cmdline: "" Jan 23 01:09:49.224419 ignition[1138]: no config URL provided Jan 23 01:09:49.224431 ignition[1138]: reading system config file "/usr/lib/ignition/user.ign" Jan 23 01:09:49.224447 ignition[1138]: no config at "/usr/lib/ignition/user.ign" Jan 23 01:09:49.224480 ignition[1138]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 23 01:09:49.227325 ignition[1138]: PUT result: OK Jan 23 01:09:49.227412 ignition[1138]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Jan 23 01:09:49.228333 
ignition[1138]: GET result: OK Jan 23 01:09:49.228504 ignition[1138]: parsing config with SHA512: eac59ab5a007bdc7c88385119352c30f7ed977b8c86c285a10c26617ce7b4d433de63d51ba7cc2973601d99240e14c961ce508dd81ab12abcd9a4bee557a36ff Jan 23 01:09:49.236944 unknown[1138]: fetched base config from "system" Jan 23 01:09:49.236961 unknown[1138]: fetched base config from "system" Jan 23 01:09:49.237562 ignition[1138]: fetch: fetch complete Jan 23 01:09:49.236969 unknown[1138]: fetched user config from "aws" Jan 23 01:09:49.237570 ignition[1138]: fetch: fetch passed Jan 23 01:09:49.237633 ignition[1138]: Ignition finished successfully Jan 23 01:09:49.240621 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 23 01:09:49.242515 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 23 01:09:49.274977 ignition[1145]: Ignition 2.22.0 Jan 23 01:09:49.274992 ignition[1145]: Stage: kargs Jan 23 01:09:49.275378 ignition[1145]: no configs at "/usr/lib/ignition/base.d" Jan 23 01:09:49.275390 ignition[1145]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 23 01:09:49.275514 ignition[1145]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 23 01:09:49.276861 ignition[1145]: PUT result: OK Jan 23 01:09:49.279938 ignition[1145]: kargs: kargs passed Jan 23 01:09:49.279994 ignition[1145]: Ignition finished successfully Jan 23 01:09:49.281461 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 23 01:09:49.283299 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Jan 23 01:09:49.317633 ignition[1152]: Ignition 2.22.0 Jan 23 01:09:49.317648 ignition[1152]: Stage: disks Jan 23 01:09:49.317954 ignition[1152]: no configs at "/usr/lib/ignition/base.d" Jan 23 01:09:49.317962 ignition[1152]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 23 01:09:49.318066 ignition[1152]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 23 01:09:49.319014 ignition[1152]: PUT result: OK Jan 23 01:09:49.321890 ignition[1152]: disks: disks passed Jan 23 01:09:49.321945 ignition[1152]: Ignition finished successfully Jan 23 01:09:49.324199 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 23 01:09:49.325064 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 23 01:09:49.325712 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 23 01:09:49.326330 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 23 01:09:49.326664 systemd[1]: Reached target sysinit.target - System Initialization. Jan 23 01:09:49.327287 systemd[1]: Reached target basic.target - Basic System. Jan 23 01:09:49.328790 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 23 01:09:49.368265 systemd-fsck[1160]: ROOT: clean, 15/553520 files, 52789/553472 blocks Jan 23 01:09:49.371530 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 23 01:09:49.373929 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 23 01:09:49.541791 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 2036722e-4586-420e-8dc7-a3b65e840c36 r/w with ordered data mode. Quota mode: none. Jan 23 01:09:49.542249 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 23 01:09:49.543131 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 23 01:09:49.545906 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 23 01:09:49.548832 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... 
Jan 23 01:09:49.550115 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 23 01:09:49.550781 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 23 01:09:49.551505 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 23 01:09:49.557660 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 23 01:09:49.559448 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 23 01:09:49.572793 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1179) Jan 23 01:09:49.575868 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem a3ccc207-e674-4ba2-b6d8-404b4581ba01 Jan 23 01:09:49.575925 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Jan 23 01:09:49.584130 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jan 23 01:09:49.584214 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Jan 23 01:09:49.586766 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 23 01:09:49.744868 initrd-setup-root[1203]: cut: /sysroot/etc/passwd: No such file or directory Jan 23 01:09:49.763717 initrd-setup-root[1210]: cut: /sysroot/etc/group: No such file or directory Jan 23 01:09:49.768986 initrd-setup-root[1217]: cut: /sysroot/etc/shadow: No such file or directory Jan 23 01:09:49.776249 initrd-setup-root[1224]: cut: /sysroot/etc/gshadow: No such file or directory Jan 23 01:09:50.016966 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 23 01:09:50.019143 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 23 01:09:50.021940 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 23 01:09:50.044840 systemd[1]: sysroot-oem.mount: Deactivated successfully. 
Jan 23 01:09:50.047285 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem a3ccc207-e674-4ba2-b6d8-404b4581ba01 Jan 23 01:09:50.073721 systemd-networkd[1128]: eth0: Gained IPv6LL Jan 23 01:09:50.076597 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 23 01:09:50.086309 ignition[1291]: INFO : Ignition 2.22.0 Jan 23 01:09:50.087110 ignition[1291]: INFO : Stage: mount Jan 23 01:09:50.087692 ignition[1291]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 23 01:09:50.087692 ignition[1291]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 23 01:09:50.087692 ignition[1291]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 23 01:09:50.089063 ignition[1291]: INFO : PUT result: OK Jan 23 01:09:50.091764 ignition[1291]: INFO : mount: mount passed Jan 23 01:09:50.092837 ignition[1291]: INFO : Ignition finished successfully Jan 23 01:09:50.093657 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 23 01:09:50.095511 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 23 01:09:50.121855 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 23 01:09:50.156777 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1304) Jan 23 01:09:50.161788 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem a3ccc207-e674-4ba2-b6d8-404b4581ba01 Jan 23 01:09:50.161853 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Jan 23 01:09:50.167998 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jan 23 01:09:50.168064 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Jan 23 01:09:50.171196 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 23 01:09:50.206377 ignition[1320]: INFO : Ignition 2.22.0
Jan 23 01:09:50.206377 ignition[1320]: INFO : Stage: files
Jan 23 01:09:50.207723 ignition[1320]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 23 01:09:50.207723 ignition[1320]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 23 01:09:50.207723 ignition[1320]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 23 01:09:50.209003 ignition[1320]: INFO : PUT result: OK
Jan 23 01:09:50.210538 ignition[1320]: DEBUG : files: compiled without relabeling support, skipping
Jan 23 01:09:50.211421 ignition[1320]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 23 01:09:50.211421 ignition[1320]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 23 01:09:50.224138 ignition[1320]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 23 01:09:50.224968 ignition[1320]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 23 01:09:50.224968 ignition[1320]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 23 01:09:50.224522 unknown[1320]: wrote ssh authorized keys file for user: core
Jan 23 01:09:50.228963 ignition[1320]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Jan 23 01:09:50.230586 ignition[1320]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1
Jan 23 01:09:50.302676 ignition[1320]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 23 01:09:50.507688 ignition[1320]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Jan 23 01:09:50.507688 ignition[1320]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jan 23 01:09:50.509561 ignition[1320]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Jan 23 01:09:50.697855 ignition[1320]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jan 23 01:09:50.956810 ignition[1320]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jan 23 01:09:50.956810 ignition[1320]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jan 23 01:09:50.960915 ignition[1320]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jan 23 01:09:50.960915 ignition[1320]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 23 01:09:50.960915 ignition[1320]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 23 01:09:50.960915 ignition[1320]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 23 01:09:50.960915 ignition[1320]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 23 01:09:50.960915 ignition[1320]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 23 01:09:50.960915 ignition[1320]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 23 01:09:50.968657 ignition[1320]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 23 01:09:50.968657 ignition[1320]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 23 01:09:50.968657 ignition[1320]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jan 23 01:09:50.971874 ignition[1320]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jan 23 01:09:50.971874 ignition[1320]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jan 23 01:09:50.971874 ignition[1320]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1
Jan 23 01:09:51.216005 ignition[1320]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jan 23 01:09:51.684945 ignition[1320]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jan 23 01:09:51.684945 ignition[1320]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Jan 23 01:09:51.687377 ignition[1320]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 23 01:09:51.690854 ignition[1320]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 23 01:09:51.690854 ignition[1320]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Jan 23 01:09:51.690854 ignition[1320]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
Jan 23 01:09:51.696900 ignition[1320]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
Jan 23 01:09:51.696900 ignition[1320]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 23 01:09:51.696900 ignition[1320]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 23 01:09:51.696900 ignition[1320]: INFO : files: files passed
Jan 23 01:09:51.696900 ignition[1320]: INFO : Ignition finished successfully
Jan 23 01:09:51.692720 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 23 01:09:51.695913 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 23 01:09:51.700009 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 23 01:09:51.708865 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 23 01:09:51.709010 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 23 01:09:51.718264 initrd-setup-root-after-ignition[1351]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 23 01:09:51.720698 initrd-setup-root-after-ignition[1351]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 23 01:09:51.721968 initrd-setup-root-after-ignition[1355]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 23 01:09:51.723258 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 23 01:09:51.724298 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 23 01:09:51.726255 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 23 01:09:51.770975 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 23 01:09:51.771090 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 23 01:09:51.772408 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 23 01:09:51.773037 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 23 01:09:51.774001 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 23 01:09:51.774832 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 23 01:09:51.797689 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 23 01:09:51.799853 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 23 01:09:51.821152 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 23 01:09:51.822140 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 23 01:09:51.822976 systemd[1]: Stopped target timers.target - Timer Units.
Jan 23 01:09:51.823776 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 23 01:09:51.823934 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 23 01:09:51.824855 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 23 01:09:51.825827 systemd[1]: Stopped target basic.target - Basic System.
Jan 23 01:09:51.826668 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 23 01:09:51.827351 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 23 01:09:51.828068 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 23 01:09:51.828873 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Jan 23 01:09:51.829646 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 23 01:09:51.830414 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 23 01:09:51.831141 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 23 01:09:51.832210 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 23 01:09:51.833029 systemd[1]: Stopped target swap.target - Swaps.
Jan 23 01:09:51.833930 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 23 01:09:51.834089 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 23 01:09:51.835123 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 23 01:09:51.836121 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 23 01:09:51.836722 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 23 01:09:51.836849 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 23 01:09:51.837681 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 23 01:09:51.837841 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 23 01:09:51.838690 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 23 01:09:51.838874 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 23 01:09:51.839438 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 23 01:09:51.839545 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 23 01:09:51.841847 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 23 01:09:51.842253 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 23 01:09:51.842407 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 23 01:09:51.846023 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 23 01:09:51.846784 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 23 01:09:51.847311 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 23 01:09:51.848183 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 23 01:09:51.848647 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 23 01:09:51.854463 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 23 01:09:51.855093 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 23 01:09:51.873380 ignition[1375]: INFO : Ignition 2.22.0
Jan 23 01:09:51.873380 ignition[1375]: INFO : Stage: umount
Jan 23 01:09:51.873737 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 23 01:09:51.875169 ignition[1375]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 23 01:09:51.875169 ignition[1375]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 23 01:09:51.876545 ignition[1375]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 23 01:09:51.878063 ignition[1375]: INFO : PUT result: OK
Jan 23 01:09:51.880537 ignition[1375]: INFO : umount: umount passed
Jan 23 01:09:51.880537 ignition[1375]: INFO : Ignition finished successfully
Jan 23 01:09:51.884263 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 23 01:09:51.885008 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 23 01:09:51.887212 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 23 01:09:51.887339 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 23 01:09:51.888225 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 23 01:09:51.888291 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 23 01:09:51.889137 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jan 23 01:09:51.889201 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jan 23 01:09:51.889962 systemd[1]: Stopped target network.target - Network.
Jan 23 01:09:51.890551 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 23 01:09:51.890617 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 23 01:09:51.891246 systemd[1]: Stopped target paths.target - Path Units.
Jan 23 01:09:51.891825 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 23 01:09:51.893816 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 23 01:09:51.894238 systemd[1]: Stopped target slices.target - Slice Units.
Jan 23 01:09:51.895149 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 23 01:09:51.895810 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 23 01:09:51.895868 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 23 01:09:51.896414 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 23 01:09:51.896464 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 23 01:09:51.897049 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 23 01:09:51.897130 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 23 01:09:51.897852 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 23 01:09:51.897913 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 23 01:09:51.898633 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 23 01:09:51.899270 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 23 01:09:51.900356 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 23 01:09:51.900489 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 23 01:09:51.902296 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 23 01:09:51.902377 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 23 01:09:51.906396 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 23 01:09:51.906542 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 23 01:09:51.910812 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Jan 23 01:09:51.911150 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 23 01:09:51.911290 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 23 01:09:51.913619 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Jan 23 01:09:51.914717 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Jan 23 01:09:51.915174 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 23 01:09:51.915231 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 23 01:09:51.916888 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 23 01:09:51.918317 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 23 01:09:51.918390 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 23 01:09:51.919015 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 23 01:09:51.919077 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 23 01:09:51.922994 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 23 01:09:51.923049 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 23 01:09:51.923900 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 23 01:09:51.923964 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 23 01:09:51.924639 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 23 01:09:51.930800 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jan 23 01:09:51.930904 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Jan 23 01:09:51.945011 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 23 01:09:51.945301 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 23 01:09:51.946386 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 23 01:09:51.946439 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 23 01:09:51.947344 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 23 01:09:51.947390 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 23 01:09:51.949032 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 23 01:09:51.949103 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 23 01:09:51.950415 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 23 01:09:51.950478 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 23 01:09:51.951611 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 23 01:09:51.951677 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 23 01:09:51.953852 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 23 01:09:51.954816 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Jan 23 01:09:51.954895 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Jan 23 01:09:51.957348 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 23 01:09:51.957413 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 23 01:09:51.960229 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 23 01:09:51.960297 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 23 01:09:51.962715 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Jan 23 01:09:51.962815 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Jan 23 01:09:51.962875 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jan 23 01:09:51.963277 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 23 01:09:51.967934 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 23 01:09:51.975939 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 23 01:09:51.976074 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 23 01:09:51.977342 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 23 01:09:51.979136 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 23 01:09:51.997955 systemd[1]: Switching root.
Jan 23 01:09:52.028415 systemd-journald[187]: Journal stopped
Jan 23 01:09:54.173034 systemd-journald[187]: Received SIGTERM from PID 1 (systemd).
Jan 23 01:09:54.173131 kernel: SELinux: policy capability network_peer_controls=1
Jan 23 01:09:54.173152 kernel: SELinux: policy capability open_perms=1
Jan 23 01:09:54.173169 kernel: SELinux: policy capability extended_socket_class=1
Jan 23 01:09:54.173187 kernel: SELinux: policy capability always_check_network=0
Jan 23 01:09:54.173204 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 23 01:09:54.173230 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 23 01:09:54.173246 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 23 01:09:54.173276 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 23 01:09:54.173299 kernel: SELinux: policy capability userspace_initial_context=0
Jan 23 01:09:54.173318 kernel: audit: type=1403 audit(1769130592.821:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 23 01:09:54.173340 systemd[1]: Successfully loaded SELinux policy in 68.103ms.
Jan 23 01:09:54.173362 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 6.232ms.
Jan 23 01:09:54.173384 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jan 23 01:09:54.173404 systemd[1]: Detected virtualization amazon.
Jan 23 01:09:54.173424 systemd[1]: Detected architecture x86-64.
Jan 23 01:09:54.173447 systemd[1]: Detected first boot.
Jan 23 01:09:54.173466 systemd[1]: Initializing machine ID from VM UUID.
Jan 23 01:09:54.173489 zram_generator::config[1418]: No configuration found.
Jan 23 01:09:54.173511 kernel: Guest personality initialized and is inactive
Jan 23 01:09:54.173532 kernel: VMCI host device registered (name=vmci, major=10, minor=258)
Jan 23 01:09:54.173553 kernel: Initialized host personality
Jan 23 01:09:54.173573 kernel: NET: Registered PF_VSOCK protocol family
Jan 23 01:09:54.173595 systemd[1]: Populated /etc with preset unit settings.
Jan 23 01:09:54.173621 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Jan 23 01:09:54.173642 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 23 01:09:54.173664 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 23 01:09:54.173684 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 23 01:09:54.173705 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 23 01:09:54.173732 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 23 01:09:54.173767 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 23 01:09:54.173786 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 23 01:09:54.173804 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 23 01:09:54.173824 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 23 01:09:54.173843 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 23 01:09:54.173866 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 23 01:09:54.173886 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 23 01:09:54.173912 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 23 01:09:54.173934 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 23 01:09:54.173955 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 23 01:09:54.173977 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 23 01:09:54.173996 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 23 01:09:54.174019 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jan 23 01:09:54.174038 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 23 01:09:54.174057 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 23 01:09:54.174076 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 23 01:09:54.174096 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 23 01:09:54.174119 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 23 01:09:54.174137 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 23 01:09:54.174157 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 23 01:09:54.174179 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 23 01:09:54.174201 systemd[1]: Reached target slices.target - Slice Units.
Jan 23 01:09:54.174223 systemd[1]: Reached target swap.target - Swaps.
Jan 23 01:09:54.174243 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 23 01:09:54.174271 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 23 01:09:54.174292 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Jan 23 01:09:54.174313 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 23 01:09:54.174335 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 23 01:09:54.174356 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 23 01:09:54.174376 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 23 01:09:54.174397 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 23 01:09:54.174421 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 23 01:09:54.174443 systemd[1]: Mounting media.mount - External Media Directory...
Jan 23 01:09:54.174464 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 23 01:09:54.174486 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 23 01:09:54.174506 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 23 01:09:54.174527 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 23 01:09:54.174550 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 23 01:09:54.174571 systemd[1]: Reached target machines.target - Containers.
Jan 23 01:09:54.174595 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 23 01:09:54.174618 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 23 01:09:54.174639 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 23 01:09:54.174660 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 23 01:09:54.174681 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 23 01:09:54.174702 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 23 01:09:54.174723 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 23 01:09:54.174767 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 23 01:09:54.174789 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 23 01:09:54.174811 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 23 01:09:54.174831 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 23 01:09:54.175108 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jan 23 01:09:54.175142 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jan 23 01:09:54.175162 systemd[1]: Stopped systemd-fsck-usr.service.
Jan 23 01:09:54.175184 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jan 23 01:09:54.175206 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 23 01:09:54.175227 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 23 01:09:54.175252 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 23 01:09:54.175272 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 23 01:09:54.175292 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Jan 23 01:09:54.175312 kernel: loop: module loaded
Jan 23 01:09:54.175332 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 23 01:09:54.175353 systemd[1]: verity-setup.service: Deactivated successfully.
Jan 23 01:09:54.175371 systemd[1]: Stopped verity-setup.service.
Jan 23 01:09:54.175391 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 23 01:09:54.175412 kernel: fuse: init (API version 7.41)
Jan 23 01:09:54.175431 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 23 01:09:54.175452 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 23 01:09:54.175471 systemd[1]: Mounted media.mount - External Media Directory.
Jan 23 01:09:54.175490 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 23 01:09:54.175508 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 23 01:09:54.175527 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 23 01:09:54.175546 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 23 01:09:54.175564 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 23 01:09:54.175583 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 23 01:09:54.175603 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 23 01:09:54.175624 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 23 01:09:54.175643 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 23 01:09:54.175661 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 23 01:09:54.175681 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 23 01:09:54.175700 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 23 01:09:54.175720 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 23 01:09:54.175739 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 23 01:09:54.175808 kernel: ACPI: bus type drm_connector registered
Jan 23 01:09:54.175827 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 23 01:09:54.175887 systemd-journald[1501]: Collecting audit messages is disabled.
Jan 23 01:09:54.175926 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 23 01:09:54.175945 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 23 01:09:54.175966 systemd-journald[1501]: Journal started
Jan 23 01:09:54.176005 systemd-journald[1501]: Runtime Journal (/run/log/journal/ec2d098938540c8189b070e9f9a9f510) is 4.7M, max 38.1M, 33.3M free.
Jan 23 01:09:53.783641 systemd[1]: Queued start job for default target multi-user.target.
Jan 23 01:09:53.792183 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Jan 23 01:09:53.792683 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 23 01:09:54.180819 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 23 01:09:54.182819 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 23 01:09:54.184190 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 23 01:09:54.204148 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 23 01:09:54.210926 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 23 01:09:54.217867 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 23 01:09:54.218777 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 23 01:09:54.218920 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 23 01:09:54.222239 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Jan 23 01:09:54.229446 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 23 01:09:54.231974 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 23 01:09:54.233787 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 23 01:09:54.239992 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 23 01:09:54.240806 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 23 01:09:54.245628 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 23 01:09:54.246504 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 23 01:09:54.248070 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 23 01:09:54.255900 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 23 01:09:54.260526 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 23 01:09:54.262127 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Jan 23 01:09:54.263668 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 23 01:09:54.264624 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 23 01:09:54.282081 systemd-journald[1501]: Time spent on flushing to /var/log/journal/ec2d098938540c8189b070e9f9a9f510 is 156.430ms for 1018 entries.
Jan 23 01:09:54.282081 systemd-journald[1501]: System Journal (/var/log/journal/ec2d098938540c8189b070e9f9a9f510) is 8M, max 195.6M, 187.6M free.
Jan 23 01:09:54.455795 systemd-journald[1501]: Received client request to flush runtime journal.
Jan 23 01:09:54.455870 kernel: loop0: detected capacity change from 0 to 128560
Jan 23 01:09:54.455895 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 23 01:09:54.285163 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 23 01:09:54.325646 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 23 01:09:54.326642 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 23 01:09:54.336212 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Jan 23 01:09:54.381614 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Jan 23 01:09:54.386212 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 23 01:09:54.401549 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 23 01:09:54.453288 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 23 01:09:54.458006 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 23 01:09:54.460480 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 23 01:09:54.463784 kernel: loop1: detected capacity change from 0 to 72368
Jan 23 01:09:54.508453 systemd-tmpfiles[1566]: ACLs are not supported, ignoring.
Jan 23 01:09:54.508887 systemd-tmpfiles[1566]: ACLs are not supported, ignoring.
Jan 23 01:09:54.516780 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 23 01:09:54.545772 kernel: loop2: detected capacity change from 0 to 224512
Jan 23 01:09:54.698258 kernel: loop3: detected capacity change from 0 to 110984
Jan 23 01:09:54.794944 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 23 01:09:54.801768 kernel: loop4: detected capacity change from 0 to 128560
Jan 23 01:09:54.834781 kernel: loop5: detected capacity change from 0 to 72368
Jan 23 01:09:54.861779 kernel: loop6: detected capacity change from 0 to 224512
Jan 23 01:09:54.895770 kernel: loop7: detected capacity change from 0 to 110984
Jan 23 01:09:54.921535 (sd-merge)[1579]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
Jan 23 01:09:54.922297 (sd-merge)[1579]: Merged extensions into '/usr'.
Jan 23 01:09:54.936969 systemd[1]: Reload requested from client PID 1551 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 23 01:09:54.937109 systemd[1]: Reloading...
Jan 23 01:09:55.086788 zram_generator::config[1601]: No configuration found.
Jan 23 01:09:55.411263 systemd[1]: Reloading finished in 473 ms.
Jan 23 01:09:55.428638 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 23 01:09:55.430813 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 23 01:09:55.441082 systemd[1]: Starting ensure-sysext.service...
Jan 23 01:09:55.444025 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 23 01:09:55.446928 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 23 01:09:55.483048 systemd[1]: Reload requested from client PID 1657 ('systemctl') (unit ensure-sysext.service)...
Jan 23 01:09:55.483069 systemd[1]: Reloading...
Jan 23 01:09:55.503997 systemd-udevd[1659]: Using default interface naming scheme 'v255'.
Jan 23 01:09:55.509523 systemd-tmpfiles[1658]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Jan 23 01:09:55.510341 systemd-tmpfiles[1658]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Jan 23 01:09:55.510725 systemd-tmpfiles[1658]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 23 01:09:55.511149 systemd-tmpfiles[1658]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 23 01:09:55.512692 systemd-tmpfiles[1658]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 23 01:09:55.513378 systemd-tmpfiles[1658]: ACLs are not supported, ignoring.
Jan 23 01:09:55.513593 systemd-tmpfiles[1658]: ACLs are not supported, ignoring.
Jan 23 01:09:55.524232 systemd-tmpfiles[1658]: Detected autofs mount point /boot during canonicalization of boot.
Jan 23 01:09:55.524455 systemd-tmpfiles[1658]: Skipping /boot
Jan 23 01:09:55.543830 systemd-tmpfiles[1658]: Detected autofs mount point /boot during canonicalization of boot.
Jan 23 01:09:55.544446 systemd-tmpfiles[1658]: Skipping /boot
Jan 23 01:09:55.593025 ldconfig[1546]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 23 01:09:55.618793 zram_generator::config[1689]: No configuration found.
Jan 23 01:09:55.858935 (udev-worker)[1703]: Network interface NamePolicy= disabled on kernel command line.
Jan 23 01:09:55.936828 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4
Jan 23 01:09:55.942572 kernel: ACPI: button: Power Button [PWRF]
Jan 23 01:09:55.942648 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input5
Jan 23 01:09:55.944772 kernel: ACPI: button: Sleep Button [SLPF]
Jan 23 01:09:55.956794 kernel: mousedev: PS/2 mouse device common for all mice
Jan 23 01:09:56.052770 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr
Jan 23 01:09:56.137529 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jan 23 01:09:56.138218 systemd[1]: Reloading finished in 654 ms.
Jan 23 01:09:56.152172 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 23 01:09:56.154280 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 23 01:09:56.166822 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 23 01:09:56.188011 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 23 01:09:56.194129 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 23 01:09:56.196984 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 23 01:09:56.203830 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 23 01:09:56.210427 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 23 01:09:56.214661 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 23 01:09:56.235644 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 23 01:09:56.235955 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 23 01:09:56.240048 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 23 01:09:56.245945 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 23 01:09:56.263598 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 23 01:09:56.264952 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 23 01:09:56.265146 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jan 23 01:09:56.265307 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 23 01:09:56.274863 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 23 01:09:56.278601 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 23 01:09:56.278915 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 23 01:09:56.279156 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 23 01:09:56.279297 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jan 23 01:09:56.279439 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 23 01:09:56.289396 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 23 01:09:56.289797 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 23 01:09:56.295084 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 23 01:09:56.296051 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 23 01:09:56.296415 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jan 23 01:09:56.296686 systemd[1]: Reached target time-set.target - System Time Set.
Jan 23 01:09:56.297948 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 23 01:09:56.311818 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 23 01:09:56.321435 systemd[1]: Finished ensure-sysext.service.
Jan 23 01:09:56.355578 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 23 01:09:56.360815 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 23 01:09:56.364128 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 23 01:09:56.367582 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 23 01:09:56.367859 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 23 01:09:56.371648 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 23 01:09:56.374285 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 23 01:09:56.375931 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 23 01:09:56.377820 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 23 01:09:56.383893 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 23 01:09:56.385606 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 23 01:09:56.389895 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 23 01:09:56.405203 augenrules[1899]: No rules
Jan 23 01:09:56.413969 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 23 01:09:56.414262 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 23 01:09:56.442816 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 23 01:09:56.443621 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 23 01:09:56.458854 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 23 01:09:56.508075 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Jan 23 01:09:56.509961 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 23 01:09:56.513981 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 23 01:09:56.586271 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 23 01:09:56.602468 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 23 01:09:56.633108 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 23 01:09:56.635030 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 23 01:09:56.641919 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 23 01:09:56.749333 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 23 01:09:56.767363 systemd-resolved[1859]: Positive Trust Anchors:
Jan 23 01:09:56.767389 systemd-resolved[1859]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 23 01:09:56.767435 systemd-resolved[1859]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 23 01:09:56.768696 systemd-networkd[1858]: lo: Link UP
Jan 23 01:09:56.768706 systemd-networkd[1858]: lo: Gained carrier
Jan 23 01:09:56.770489 systemd-networkd[1858]: Enumeration completed
Jan 23 01:09:56.770628 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 23 01:09:56.771274 systemd-networkd[1858]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 23 01:09:56.771287 systemd-networkd[1858]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 23 01:09:56.773926 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Jan 23 01:09:56.776546 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jan 23 01:09:56.776708 systemd-resolved[1859]: Defaulting to hostname 'linux'.
Jan 23 01:09:56.777489 systemd-networkd[1858]: eth0: Link UP
Jan 23 01:09:56.777685 systemd-networkd[1858]: eth0: Gained carrier
Jan 23 01:09:56.777724 systemd-networkd[1858]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 23 01:09:56.779634 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 23 01:09:56.781067 systemd[1]: Reached target network.target - Network.
Jan 23 01:09:56.782290 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 23 01:09:56.782869 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 23 01:09:56.783950 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jan 23 01:09:56.784881 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jan 23 01:09:56.785836 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Jan 23 01:09:56.787007 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jan 23 01:09:56.787984 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jan 23 01:09:56.788645 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jan 23 01:09:56.789275 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jan 23 01:09:56.789412 systemd[1]: Reached target paths.target - Path Units.
Jan 23 01:09:56.790021 systemd[1]: Reached target timers.target - Timer Units.
Jan 23 01:09:56.790830 systemd-networkd[1858]: eth0: DHCPv4 address 172.31.21.166/20, gateway 172.31.16.1 acquired from 172.31.16.1
Jan 23 01:09:56.792415 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jan 23 01:09:56.795994 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jan 23 01:09:56.799879 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Jan 23 01:09:56.800494 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Jan 23 01:09:56.800955 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Jan 23 01:09:56.803326 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jan 23 01:09:56.804112 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Jan 23 01:09:56.805331 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jan 23 01:09:56.806697 systemd[1]: Reached target sockets.target - Socket Units.
Jan 23 01:09:56.807262 systemd[1]: Reached target basic.target - Basic System.
Jan 23 01:09:56.807896 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jan 23 01:09:56.807933 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jan 23 01:09:56.809261 systemd[1]: Starting containerd.service - containerd container runtime...
Jan 23 01:09:56.813956 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Jan 23 01:09:56.819105 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jan 23 01:09:56.823017 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 23 01:09:56.825526 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jan 23 01:09:56.830296 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jan 23 01:09:56.831923 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jan 23 01:09:56.842996 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Jan 23 01:09:56.847024 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jan 23 01:09:56.847556 jq[1944]: false
Jan 23 01:09:56.852584 systemd[1]: Started ntpd.service - Network Time Service.
Jan 23 01:09:56.859969 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jan 23 01:09:56.863982 systemd[1]: Starting setup-oem.service - Setup OEM...
Jan 23 01:09:56.874343 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jan 23 01:09:56.891056 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jan 23 01:09:56.903467 oslogin_cache_refresh[1946]: Refreshing passwd entry cache
Jan 23 01:09:56.905730 google_oslogin_nss_cache[1946]: oslogin_cache_refresh[1946]: Refreshing passwd entry cache
Jan 23 01:09:56.910024 systemd[1]: Starting systemd-logind.service - User Login Management...
Jan 23 01:09:56.913437 oslogin_cache_refresh[1946]: Failure getting users, quitting
Jan 23 01:09:56.917864 google_oslogin_nss_cache[1946]: oslogin_cache_refresh[1946]: Failure getting users, quitting
Jan 23 01:09:56.917864 google_oslogin_nss_cache[1946]: oslogin_cache_refresh[1946]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Jan 23 01:09:56.917864 google_oslogin_nss_cache[1946]: oslogin_cache_refresh[1946]: Refreshing group entry cache
Jan 23 01:09:56.913878 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jan 23 01:09:56.913458 oslogin_cache_refresh[1946]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Jan 23 01:09:56.914614 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jan 23 01:09:56.913514 oslogin_cache_refresh[1946]: Refreshing group entry cache
Jan 23 01:09:56.926806 google_oslogin_nss_cache[1946]: oslogin_cache_refresh[1946]: Failure getting groups, quitting
Jan 23 01:09:56.926806 google_oslogin_nss_cache[1946]: oslogin_cache_refresh[1946]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Jan 23 01:09:56.922157 oslogin_cache_refresh[1946]: Failure getting groups, quitting
Jan 23 01:09:56.918894 systemd[1]: Starting update-engine.service - Update Engine...
Jan 23 01:09:56.922173 oslogin_cache_refresh[1946]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Jan 23 01:09:56.922409 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jan 23 01:09:56.926614 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Jan 23 01:09:56.930566 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 23 01:09:56.932560 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jan 23 01:09:56.932919 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jan 23 01:09:56.944370 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Jan 23 01:09:56.944721 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Jan 23 01:09:56.954787 extend-filesystems[1945]: Found /dev/nvme0n1p6
Jan 23 01:09:56.963877 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jan 23 01:09:56.965985 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jan 23 01:09:57.001002 jq[1960]: true
Jan 23 01:09:57.002342 extend-filesystems[1945]: Found /dev/nvme0n1p9
Jan 23 01:09:57.016891 extend-filesystems[1945]: Checking size of /dev/nvme0n1p9
Jan 23 01:09:57.023249 (ntainerd)[1975]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jan 23 01:09:57.073799 update_engine[1959]: I20260123 01:09:57.070472 1959 main.cc:92] Flatcar Update Engine starting
Jan 23 01:09:57.077030 systemd[1]: Finished setup-oem.service - Setup OEM.
Jan 23 01:09:57.084425 ntpd[1948]: ntpd 4.2.8p18@1.4062-o Thu Jan 22 21:35:52 UTC 2026 (1): Starting
Jan 23 01:09:57.085051 ntpd[1948]: 23 Jan 01:09:57 ntpd[1948]: ntpd 4.2.8p18@1.4062-o Thu Jan 22 21:35:52 UTC 2026 (1): Starting
Jan 23 01:09:57.085051 ntpd[1948]: 23 Jan 01:09:57 ntpd[1948]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Jan 23 01:09:57.085051 ntpd[1948]: 23 Jan 01:09:57 ntpd[1948]: ----------------------------------------------------
Jan 23 01:09:57.085051 ntpd[1948]: 23 Jan 01:09:57 ntpd[1948]: ntp-4 is maintained by Network Time Foundation,
Jan 23 01:09:57.085051 ntpd[1948]: 23 Jan 01:09:57 ntpd[1948]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Jan 23 01:09:57.085051 ntpd[1948]: 23 Jan 01:09:57 ntpd[1948]: corporation. Support and training for ntp-4 are
Jan 23 01:09:57.085051 ntpd[1948]: 23 Jan 01:09:57 ntpd[1948]: available at https://www.nwtime.org/support
Jan 23 01:09:57.085051 ntpd[1948]: 23 Jan 01:09:57 ntpd[1948]: ----------------------------------------------------
Jan 23 01:09:57.084500 ntpd[1948]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Jan 23 01:09:57.084511 ntpd[1948]: ----------------------------------------------------
Jan 23 01:09:57.084520 ntpd[1948]: ntp-4 is maintained by Network Time Foundation,
Jan 23 01:09:57.084528 ntpd[1948]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Jan 23 01:09:57.084538 ntpd[1948]: corporation. Support and training for ntp-4 are
Jan 23 01:09:57.084546 ntpd[1948]: available at https://www.nwtime.org/support
Jan 23 01:09:57.084555 ntpd[1948]: ----------------------------------------------------
Jan 23 01:09:57.088534 ntpd[1948]: proto: precision = 0.085 usec (-23)
Jan 23 01:09:57.091884 ntpd[1948]: 23 Jan 01:09:57 ntpd[1948]: proto: precision = 0.085 usec (-23)
Jan 23 01:09:57.091884 ntpd[1948]: 23 Jan 01:09:57 ntpd[1948]: basedate set to 2026-01-10
Jan 23 01:09:57.091884 ntpd[1948]: 23 Jan 01:09:57 ntpd[1948]: gps base set to 2026-01-11 (week 2401)
Jan 23 01:09:57.091884 ntpd[1948]: 23 Jan 01:09:57 ntpd[1948]: Listen and drop on 0 v6wildcard [::]:123
Jan 23 01:09:57.091884 ntpd[1948]: 23 Jan 01:09:57 ntpd[1948]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Jan 23 01:09:57.091884 ntpd[1948]: 23 Jan 01:09:57 ntpd[1948]: Listen normally on 2 lo 127.0.0.1:123
Jan 23 01:09:57.091884 ntpd[1948]: 23 Jan 01:09:57 ntpd[1948]: Listen normally on 3 eth0 172.31.21.166:123
Jan 23 01:09:57.091884 ntpd[1948]: 23 Jan 01:09:57 ntpd[1948]: Listen normally on 4 lo [::1]:123
Jan 23 01:09:57.091884 ntpd[1948]: 23 Jan 01:09:57 ntpd[1948]: bind(21) AF_INET6 [fe80::402:a6ff:fef6:df57%2]:123 flags 0x811 failed: Cannot assign requested address
Jan 23 01:09:57.091884 ntpd[1948]: 23 Jan 01:09:57 ntpd[1948]: unable to create socket on eth0 (5) for [fe80::402:a6ff:fef6:df57%2]:123
Jan 23 01:09:57.091036 ntpd[1948]: basedate set to 2026-01-10
Jan 23 01:09:57.091055 ntpd[1948]: gps base set to 2026-01-11 (week 2401)
Jan 23 01:09:57.091193 ntpd[1948]: Listen and drop on 0 v6wildcard [::]:123
Jan 23 01:09:57.091220 ntpd[1948]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Jan 23 01:09:57.091426 ntpd[1948]: Listen normally on 2 lo 127.0.0.1:123
Jan 23 01:09:57.091453 ntpd[1948]: Listen normally on 3 eth0 172.31.21.166:123
Jan 23 01:09:57.092477 kernel: ntpd[1948]: segfault at 24 ip 0000562f68eb5aeb sp 00007ffcce8e9fa0 error 4 in ntpd[68aeb,562f68e53000+80000] likely on CPU 0 (core 0, socket 0)
Jan 23 01:09:57.091480 ntpd[1948]: Listen normally on 4 lo [::1]:123
Jan 23 01:09:57.091508 ntpd[1948]: bind(21) AF_INET6 [fe80::402:a6ff:fef6:df57%2]:123 flags 0x811 failed: Cannot assign requested address
Jan 23 01:09:57.091526 ntpd[1948]: unable to create socket on eth0 (5) for [fe80::402:a6ff:fef6:df57%2]:123
Jan 23 01:09:57.100914 kernel: Code: 0f 1e fa 41 56 41 55 41 54 55 53 48 89 fb e8 8c eb f9 ff 44 8b 28 49 89 c4 e8 51 6b ff ff 48 89 c5 48 85 db 0f 84 a5 00 00 00 <0f> b7 0b 66 83 f9 02 0f 84 c0 00 00 00 66 83 f9 0a 74 32 66 85 c9
Jan 23 01:09:57.110908 tar[1964]: linux-amd64/LICENSE
Jan 23 01:09:57.110908 tar[1964]: linux-amd64/helm
Jan 23 01:09:57.111319 jq[1984]: true
Jan 23 01:09:57.136984 systemd-coredump[2002]: Process 1948 (ntpd) of user 0 terminated abnormally with signal 11/SEGV, processing...
Jan 23 01:09:57.140950 systemd[1]: Created slice system-systemd\x2dcoredump.slice - Slice /system/systemd-coredump.
Jan 23 01:09:57.143077 extend-filesystems[1945]: Resized partition /dev/nvme0n1p9
Jan 23 01:09:57.154184 systemd[1]: Started systemd-coredump@0-2002-0.service - Process Core Dump (PID 2002/UID 0).
Jan 23 01:09:57.157710 dbus-daemon[1942]: [system] SELinux support is enabled
Jan 23 01:09:57.157913 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jan 23 01:09:57.163865 systemd[1]: motdgen.service: Deactivated successfully.
Jan 23 01:09:57.166000 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jan 23 01:09:57.166922 extend-filesystems[2004]: resize2fs 1.47.3 (8-Jul-2025)
Jan 23 01:09:57.170234 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jan 23 01:09:57.170291 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jan 23 01:09:57.171009 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jan 23 01:09:57.171033 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jan 23 01:09:57.185780 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 3587067 blocks
Jan 23 01:09:57.188528 dbus-daemon[1942]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1858 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Jan 23 01:09:57.193040 coreos-metadata[1941]: Jan 23 01:09:57.190 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Jan 23 01:09:57.204187 update_engine[1959]: I20260123 01:09:57.199331 1959 update_check_scheduler.cc:74] Next update check in 8m52s
Jan 23 01:09:57.199856 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Jan 23 01:09:57.200954 systemd[1]: Started update-engine.service - Update Engine.
Jan 23 01:09:57.205838 coreos-metadata[1941]: Jan 23 01:09:57.205 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1
Jan 23 01:09:57.205838 coreos-metadata[1941]: Jan 23 01:09:57.205 INFO Fetch successful
Jan 23 01:09:57.205838 coreos-metadata[1941]: Jan 23 01:09:57.205 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1
Jan 23 01:09:57.207660 coreos-metadata[1941]: Jan 23 01:09:57.207 INFO Fetch successful
Jan 23 01:09:57.207660 coreos-metadata[1941]: Jan 23 01:09:57.207 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1
Jan 23 01:09:57.211150 coreos-metadata[1941]: Jan 23 01:09:57.210 INFO Fetch successful
Jan 23 01:09:57.211150 coreos-metadata[1941]: Jan 23 01:09:57.210 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1
Jan 23 01:09:57.218037 coreos-metadata[1941]: Jan 23 01:09:57.215 INFO Fetch successful
Jan 23 01:09:57.218037 coreos-metadata[1941]: Jan 23 01:09:57.215 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1
Jan 23 01:09:57.218037 coreos-metadata[1941]: Jan 23 01:09:57.217 INFO Fetch failed with 404: resource not found
Jan 23 01:09:57.218037 coreos-metadata[1941]: Jan 23 01:09:57.217 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1
Jan 23 01:09:57.215967 systemd-logind[1956]: Watching system buttons on /dev/input/event2 (Power Button)
Jan 23 01:09:57.216005 systemd-logind[1956]: Watching system buttons on /dev/input/event3 (Sleep Button)
Jan 23 01:09:57.216032 systemd-logind[1956]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jan 23 01:09:57.217560 systemd-logind[1956]: New seat seat0.
Jan 23 01:09:57.222139 coreos-metadata[1941]: Jan 23 01:09:57.219 INFO Fetch successful
Jan 23 01:09:57.222139 coreos-metadata[1941]: Jan 23 01:09:57.219 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1
Jan 23 01:09:57.229414 coreos-metadata[1941]: Jan 23 01:09:57.227 INFO Fetch successful
Jan 23 01:09:57.229414 coreos-metadata[1941]: Jan 23 01:09:57.227 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1
Jan 23 01:09:57.229414 coreos-metadata[1941]: Jan 23 01:09:57.229 INFO Fetch successful
Jan 23 01:09:57.229414 coreos-metadata[1941]: Jan 23 01:09:57.229 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1
Jan 23 01:09:57.232853 coreos-metadata[1941]: Jan 23 01:09:57.232 INFO Fetch successful
Jan 23 01:09:57.232853 coreos-metadata[1941]: Jan 23 01:09:57.232 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1
Jan 23 01:09:57.244632 coreos-metadata[1941]: Jan 23 01:09:57.241 INFO Fetch successful
Jan 23 01:09:57.261269 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jan 23 01:09:57.262594 systemd[1]: Started systemd-logind.service - User Login Management.
Jan 23 01:09:57.294774 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 3587067
Jan 23 01:09:57.311778 extend-filesystems[2004]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required
Jan 23 01:09:57.311778 extend-filesystems[2004]: old_desc_blocks = 1, new_desc_blocks = 2
Jan 23 01:09:57.311778 extend-filesystems[2004]: The filesystem on /dev/nvme0n1p9 is now 3587067 (4k) blocks long.
Jan 23 01:09:57.330887 extend-filesystems[1945]: Resized filesystem in /dev/nvme0n1p9
Jan 23 01:09:57.313230 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jan 23 01:09:57.325594 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jan 23 01:09:57.348793 bash[2029]: Updated "/home/core/.ssh/authorized_keys"
Jan 23 01:09:57.347320 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jan 23 01:09:57.364930 systemd[1]: Starting sshkeys.service...
Jan 23 01:09:57.411614 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Jan 23 01:09:57.413002 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jan 23 01:09:57.416104 systemd-coredump[2003]: Process 1948 (ntpd) of user 0 dumped core. Module libnss_usrfiles.so.2 without build-id. Module libgcc_s.so.1 without build-id. Module ld-linux-x86-64.so.2 without build-id. Module libc.so.6 without build-id. Module libcrypto.so.3 without build-id. Module libm.so.6 without build-id. Module libcap.so.2 without build-id. Module ntpd without build-id. Stack trace of thread 1948: #0 0x0000562f68eb5aeb n/a (ntpd + 0x68aeb) #1 0x0000562f68e5ecdf n/a (ntpd + 0x11cdf) #2 0x0000562f68e5f575 n/a (ntpd + 0x12575) #3 0x0000562f68e5ad8a n/a (ntpd + 0xdd8a) #4 0x0000562f68e5c5d3 n/a (ntpd + 0xf5d3) #5 0x0000562f68e64fd1 n/a (ntpd + 0x17fd1) #6 0x0000562f68e55c2d n/a (ntpd + 0x8c2d) #7 0x00007f21690b216c n/a (libc.so.6 + 0x2716c) #8 0x00007f21690b2229 __libc_start_main (libc.so.6 + 0x27229) #9 0x0000562f68e55c55 n/a (ntpd + 0x8c55) ELF object binary architecture: AMD x86-64
Jan 23 01:09:57.421995 systemd[1]: ntpd.service: Main process exited, code=dumped, status=11/SEGV
Jan 23 01:09:57.422181 systemd[1]: ntpd.service: Failed with result 'core-dump'.
Jan 23 01:09:57.441694 systemd[1]: systemd-coredump@0-2002-0.service: Deactivated successfully.
Jan 23 01:09:57.472711 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Jan 23 01:09:57.478103 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Jan 23 01:09:57.536389 systemd[1]: ntpd.service: Scheduled restart job, restart counter is at 1. Jan 23 01:09:57.547475 systemd[1]: Started ntpd.service - Network Time Service. Jan 23 01:09:57.663663 ntpd[2066]: ntpd 4.2.8p18@1.4062-o Thu Jan 22 21:35:52 UTC 2026 (1): Starting Jan 23 01:09:57.663760 ntpd[2066]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 23 01:09:57.664100 ntpd[2066]: 23 Jan 01:09:57 ntpd[2066]: ntpd 4.2.8p18@1.4062-o Thu Jan 22 21:35:52 UTC 2026 (1): Starting Jan 23 01:09:57.666824 ntpd[2066]: 23 Jan 01:09:57 ntpd[2066]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 23 01:09:57.666824 ntpd[2066]: 23 Jan 01:09:57 ntpd[2066]: ---------------------------------------------------- Jan 23 01:09:57.666824 ntpd[2066]: 23 Jan 01:09:57 ntpd[2066]: ntp-4 is maintained by Network Time Foundation, Jan 23 01:09:57.666824 ntpd[2066]: 23 Jan 01:09:57 ntpd[2066]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 23 01:09:57.666824 ntpd[2066]: 23 Jan 01:09:57 ntpd[2066]: corporation. Support and training for ntp-4 are Jan 23 01:09:57.666824 ntpd[2066]: 23 Jan 01:09:57 ntpd[2066]: available at https://www.nwtime.org/support Jan 23 01:09:57.666824 ntpd[2066]: 23 Jan 01:09:57 ntpd[2066]: ---------------------------------------------------- Jan 23 01:09:57.666033 ntpd[2066]: ---------------------------------------------------- Jan 23 01:09:57.666045 ntpd[2066]: ntp-4 is maintained by Network Time Foundation, Jan 23 01:09:57.666055 ntpd[2066]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 23 01:09:57.666064 ntpd[2066]: corporation. 
Support and training for ntp-4 are Jan 23 01:09:57.666073 ntpd[2066]: available at https://www.nwtime.org/support Jan 23 01:09:57.666082 ntpd[2066]: ---------------------------------------------------- Jan 23 01:09:57.679791 kernel: ntpd[2066]: segfault at 24 ip 00005642505e0aeb sp 00007ffea944a340 error 4 in ntpd[68aeb,56425057e000+80000] likely on CPU 0 (core 0, socket 0) Jan 23 01:09:57.679876 kernel: Code: 0f 1e fa 41 56 41 55 41 54 55 53 48 89 fb e8 8c eb f9 ff 44 8b 28 49 89 c4 e8 51 6b ff ff 48 89 c5 48 85 db 0f 84 a5 00 00 00 <0f> b7 0b 66 83 f9 02 0f 84 c0 00 00 00 66 83 f9 0a 74 32 66 85 c9 Jan 23 01:09:57.674977 ntpd[2066]: proto: precision = 0.065 usec (-24) Jan 23 01:09:57.679976 ntpd[2066]: 23 Jan 01:09:57 ntpd[2066]: proto: precision = 0.065 usec (-24) Jan 23 01:09:57.679976 ntpd[2066]: 23 Jan 01:09:57 ntpd[2066]: basedate set to 2026-01-10 Jan 23 01:09:57.679976 ntpd[2066]: 23 Jan 01:09:57 ntpd[2066]: gps base set to 2026-01-11 (week 2401) Jan 23 01:09:57.679976 ntpd[2066]: 23 Jan 01:09:57 ntpd[2066]: Listen and drop on 0 v6wildcard [::]:123 Jan 23 01:09:57.679976 ntpd[2066]: 23 Jan 01:09:57 ntpd[2066]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 23 01:09:57.679976 ntpd[2066]: 23 Jan 01:09:57 ntpd[2066]: Listen normally on 2 lo 127.0.0.1:123 Jan 23 01:09:57.679976 ntpd[2066]: 23 Jan 01:09:57 ntpd[2066]: Listen normally on 3 eth0 172.31.21.166:123 Jan 23 01:09:57.679976 ntpd[2066]: 23 Jan 01:09:57 ntpd[2066]: Listen normally on 4 lo [::1]:123 Jan 23 01:09:57.679976 ntpd[2066]: 23 Jan 01:09:57 ntpd[2066]: bind(21) AF_INET6 [fe80::402:a6ff:fef6:df57%2]:123 flags 0x811 failed: Cannot assign requested address Jan 23 01:09:57.679976 ntpd[2066]: 23 Jan 01:09:57 ntpd[2066]: unable to create socket on eth0 (5) for [fe80::402:a6ff:fef6:df57%2]:123 Jan 23 01:09:57.675256 ntpd[2066]: basedate set to 2026-01-10 Jan 23 01:09:57.675270 ntpd[2066]: gps base set to 2026-01-11 (week 2401) Jan 23 01:09:57.675366 ntpd[2066]: Listen and drop on 0 v6wildcard [::]:123 
Jan 23 01:09:57.675394 ntpd[2066]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 23 01:09:57.675587 ntpd[2066]: Listen normally on 2 lo 127.0.0.1:123 Jan 23 01:09:57.675615 ntpd[2066]: Listen normally on 3 eth0 172.31.21.166:123 Jan 23 01:09:57.675644 ntpd[2066]: Listen normally on 4 lo [::1]:123 Jan 23 01:09:57.675672 ntpd[2066]: bind(21) AF_INET6 [fe80::402:a6ff:fef6:df57%2]:123 flags 0x811 failed: Cannot assign requested address Jan 23 01:09:57.675692 ntpd[2066]: unable to create socket on eth0 (5) for [fe80::402:a6ff:fef6:df57%2]:123 Jan 23 01:09:57.710205 systemd-coredump[2089]: Process 2066 (ntpd) of user 0 terminated abnormally with signal 11/SEGV, processing... Jan 23 01:09:57.721245 systemd[1]: Started systemd-coredump@1-2089-0.service - Process Core Dump (PID 2089/UID 0). Jan 23 01:09:57.747914 coreos-metadata[2053]: Jan 23 01:09:57.747 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jan 23 01:09:57.750330 coreos-metadata[2053]: Jan 23 01:09:57.750 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Jan 23 01:09:57.761124 coreos-metadata[2053]: Jan 23 01:09:57.760 INFO Fetch successful Jan 23 01:09:57.761124 coreos-metadata[2053]: Jan 23 01:09:57.760 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Jan 23 01:09:57.764097 coreos-metadata[2053]: Jan 23 01:09:57.763 INFO Fetch successful Jan 23 01:09:57.770939 unknown[2053]: wrote ssh authorized keys file for user: core Jan 23 01:09:57.857874 systemd[1]: Started systemd-hostnamed.service - Hostname Service. 
Jan 23 01:09:57.871272 dbus-daemon[1942]: [system] Successfully activated service 'org.freedesktop.hostname1' Jan 23 01:09:57.873689 dbus-daemon[1942]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=2008 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jan 23 01:09:57.889055 update-ssh-keys[2115]: Updated "/home/core/.ssh/authorized_keys" Jan 23 01:09:57.882926 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 23 01:09:57.888054 systemd[1]: Finished sshkeys.service. Jan 23 01:09:57.905983 systemd[1]: Starting polkit.service - Authorization Manager... Jan 23 01:09:57.927963 systemd-coredump[2104]: Process 2066 (ntpd) of user 0 dumped core. Module libnss_usrfiles.so.2 without build-id. Module libgcc_s.so.1 without build-id. Module ld-linux-x86-64.so.2 without build-id. Module libc.so.6 without build-id. Module libcrypto.so.3 without build-id. Module libm.so.6 without build-id. Module libcap.so.2 without build-id. Module ntpd without build-id. Stack trace of thread 2066: #0 0x00005642505e0aeb n/a (ntpd + 0x68aeb) #1 0x0000564250589cdf n/a (ntpd + 0x11cdf) #2 0x000056425058a575 n/a (ntpd + 0x12575) #3 0x0000564250585d8a n/a (ntpd + 0xdd8a) #4 0x00005642505875d3 n/a (ntpd + 0xf5d3) #5 0x000056425058ffd1 n/a (ntpd + 0x17fd1) #6 0x0000564250580c2d n/a (ntpd + 0x8c2d) #7 0x00007fe3158bb16c n/a (libc.so.6 + 0x2716c) #8 0x00007fe3158bb229 __libc_start_main (libc.so.6 + 0x27229) #9 0x0000564250580c55 n/a (ntpd + 0x8c55) ELF object binary architecture: AMD x86-64 Jan 23 01:09:57.934722 systemd[1]: ntpd.service: Main process exited, code=dumped, status=11/SEGV Jan 23 01:09:57.936064 systemd[1]: ntpd.service: Failed with result 'core-dump'. Jan 23 01:09:57.943551 systemd[1]: systemd-coredump@1-2089-0.service: Deactivated successfully. 
Jan 23 01:09:57.968596 locksmithd[2015]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 23 01:09:57.991528 containerd[1975]: time="2026-01-23T01:09:57Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jan 23 01:09:57.997283 containerd[1975]: time="2026-01-23T01:09:57.997232467Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7 Jan 23 01:09:58.004774 sshd_keygen[1980]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 23 01:09:58.066217 containerd[1975]: time="2026-01-23T01:09:58.066042119Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="17.418µs" Jan 23 01:09:58.066217 containerd[1975]: time="2026-01-23T01:09:58.066097193Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jan 23 01:09:58.066217 containerd[1975]: time="2026-01-23T01:09:58.066123734Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jan 23 01:09:58.066412 containerd[1975]: time="2026-01-23T01:09:58.066311745Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jan 23 01:09:58.066412 containerd[1975]: time="2026-01-23T01:09:58.066332559Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jan 23 01:09:58.066412 containerd[1975]: time="2026-01-23T01:09:58.066364389Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 23 01:09:58.066531 containerd[1975]: time="2026-01-23T01:09:58.066434539Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile 
type=io.containerd.snapshotter.v1 Jan 23 01:09:58.066531 containerd[1975]: time="2026-01-23T01:09:58.066450041Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 23 01:09:58.069342 containerd[1975]: time="2026-01-23T01:09:58.069289685Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 23 01:09:58.069342 containerd[1975]: time="2026-01-23T01:09:58.069337558Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 23 01:09:58.069503 containerd[1975]: time="2026-01-23T01:09:58.069360053Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 23 01:09:58.069503 containerd[1975]: time="2026-01-23T01:09:58.069372083Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jan 23 01:09:58.069503 containerd[1975]: time="2026-01-23T01:09:58.069498151Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jan 23 01:09:58.073586 containerd[1975]: time="2026-01-23T01:09:58.071795844Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 23 01:09:58.073586 containerd[1975]: time="2026-01-23T01:09:58.071875397Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 23 01:09:58.073586 containerd[1975]: time="2026-01-23T01:09:58.071897321Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange 
type=io.containerd.event.v1 Jan 23 01:09:58.073586 containerd[1975]: time="2026-01-23T01:09:58.071937268Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jan 23 01:09:58.073586 containerd[1975]: time="2026-01-23T01:09:58.072253992Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jan 23 01:09:58.073586 containerd[1975]: time="2026-01-23T01:09:58.072367875Z" level=info msg="metadata content store policy set" policy=shared Jan 23 01:09:58.080428 containerd[1975]: time="2026-01-23T01:09:58.078873579Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jan 23 01:09:58.080428 containerd[1975]: time="2026-01-23T01:09:58.078969656Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jan 23 01:09:58.080428 containerd[1975]: time="2026-01-23T01:09:58.078992462Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jan 23 01:09:58.080428 containerd[1975]: time="2026-01-23T01:09:58.079055005Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jan 23 01:09:58.080428 containerd[1975]: time="2026-01-23T01:09:58.079073535Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jan 23 01:09:58.080428 containerd[1975]: time="2026-01-23T01:09:58.079089735Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jan 23 01:09:58.080428 containerd[1975]: time="2026-01-23T01:09:58.079120301Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jan 23 01:09:58.080428 containerd[1975]: time="2026-01-23T01:09:58.079140521Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jan 23 
01:09:58.080428 containerd[1975]: time="2026-01-23T01:09:58.079159532Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jan 23 01:09:58.080428 containerd[1975]: time="2026-01-23T01:09:58.079176353Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jan 23 01:09:58.080428 containerd[1975]: time="2026-01-23T01:09:58.079193381Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jan 23 01:09:58.080428 containerd[1975]: time="2026-01-23T01:09:58.079214457Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jan 23 01:09:58.080428 containerd[1975]: time="2026-01-23T01:09:58.079360802Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jan 23 01:09:58.080428 containerd[1975]: time="2026-01-23T01:09:58.079385581Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jan 23 01:09:58.080979 containerd[1975]: time="2026-01-23T01:09:58.079405359Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jan 23 01:09:58.080979 containerd[1975]: time="2026-01-23T01:09:58.079440805Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jan 23 01:09:58.080979 containerd[1975]: time="2026-01-23T01:09:58.079463042Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jan 23 01:09:58.080979 containerd[1975]: time="2026-01-23T01:09:58.079482099Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jan 23 01:09:58.080979 containerd[1975]: time="2026-01-23T01:09:58.079501023Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jan 23 01:09:58.080979 containerd[1975]: 
time="2026-01-23T01:09:58.079517513Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jan 23 01:09:58.080979 containerd[1975]: time="2026-01-23T01:09:58.079535656Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jan 23 01:09:58.080979 containerd[1975]: time="2026-01-23T01:09:58.079551700Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jan 23 01:09:58.080979 containerd[1975]: time="2026-01-23T01:09:58.079588076Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jan 23 01:09:58.080979 containerd[1975]: time="2026-01-23T01:09:58.079648501Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jan 23 01:09:58.080979 containerd[1975]: time="2026-01-23T01:09:58.079665941Z" level=info msg="Start snapshots syncer" Jan 23 01:09:58.080979 containerd[1975]: time="2026-01-23T01:09:58.080892213Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jan 23 01:09:58.087318 containerd[1975]: time="2026-01-23T01:09:58.081388472Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jan 23 01:09:58.087318 containerd[1975]: time="2026-01-23T01:09:58.081464897Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jan 23 01:09:58.086340 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. 
Jan 23 01:09:58.087690 containerd[1975]: time="2026-01-23T01:09:58.083179951Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jan 23 01:09:58.087690 containerd[1975]: time="2026-01-23T01:09:58.083398180Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jan 23 01:09:58.087690 containerd[1975]: time="2026-01-23T01:09:58.083431954Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jan 23 01:09:58.087690 containerd[1975]: time="2026-01-23T01:09:58.083452964Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jan 23 01:09:58.087690 containerd[1975]: time="2026-01-23T01:09:58.083467857Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jan 23 01:09:58.087690 containerd[1975]: time="2026-01-23T01:09:58.083487119Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jan 23 01:09:58.087690 containerd[1975]: time="2026-01-23T01:09:58.083502204Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jan 23 01:09:58.087690 containerd[1975]: time="2026-01-23T01:09:58.083516609Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jan 23 01:09:58.087690 containerd[1975]: time="2026-01-23T01:09:58.083555400Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jan 23 01:09:58.087690 containerd[1975]: time="2026-01-23T01:09:58.083570698Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jan 23 01:09:58.087690 containerd[1975]: time="2026-01-23T01:09:58.083583712Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jan 23 01:09:58.087690 
containerd[1975]: time="2026-01-23T01:09:58.083636954Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 23 01:09:58.087690 containerd[1975]: time="2026-01-23T01:09:58.083661001Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 23 01:09:58.087690 containerd[1975]: time="2026-01-23T01:09:58.083728445Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 23 01:09:58.094606 containerd[1975]: time="2026-01-23T01:09:58.083757339Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 23 01:09:58.094606 containerd[1975]: time="2026-01-23T01:09:58.083778333Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jan 23 01:09:58.094606 containerd[1975]: time="2026-01-23T01:09:58.083792151Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jan 23 01:09:58.094606 containerd[1975]: time="2026-01-23T01:09:58.083813906Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jan 23 01:09:58.094606 containerd[1975]: time="2026-01-23T01:09:58.083834906Z" level=info msg="runtime interface created" Jan 23 01:09:58.094606 containerd[1975]: time="2026-01-23T01:09:58.083842507Z" level=info msg="created NRI interface" Jan 23 01:09:58.094606 containerd[1975]: time="2026-01-23T01:09:58.083854489Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jan 23 01:09:58.094606 containerd[1975]: time="2026-01-23T01:09:58.083874804Z" level=info msg="Connect containerd service" Jan 23 01:09:58.094606 containerd[1975]: time="2026-01-23T01:09:58.083907374Z" level=info msg="using experimental 
NRI integration - disable nri plugin to prevent this" Jan 23 01:09:58.094606 containerd[1975]: time="2026-01-23T01:09:58.085710685Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 23 01:09:58.091988 systemd[1]: ntpd.service: Scheduled restart job, restart counter is at 2. Jan 23 01:09:58.098645 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 23 01:09:58.121177 systemd[1]: Started ntpd.service - Network Time Service. Jan 23 01:09:58.136836 systemd-networkd[1858]: eth0: Gained IPv6LL Jan 23 01:09:58.153692 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 23 01:09:58.183109 systemd[1]: Reached target network-online.target - Network is Online. Jan 23 01:09:58.191147 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Jan 23 01:09:58.199339 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 01:09:58.206159 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 23 01:09:58.215651 systemd[1]: issuegen.service: Deactivated successfully. Jan 23 01:09:58.215970 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 23 01:09:58.223299 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... 
Jan 23 01:09:58.245821 ntpd[2169]: ntpd 4.2.8p18@1.4062-o Thu Jan 22 21:35:52 UTC 2026 (1): Starting Jan 23 01:09:58.246305 ntpd[2169]: 23 Jan 01:09:58 ntpd[2169]: ntpd 4.2.8p18@1.4062-o Thu Jan 22 21:35:52 UTC 2026 (1): Starting Jan 23 01:09:58.246534 ntpd[2169]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 23 01:09:58.246772 ntpd[2169]: 23 Jan 01:09:58 ntpd[2169]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 23 01:09:58.246843 ntpd[2169]: ---------------------------------------------------- Jan 23 01:09:58.246913 ntpd[2169]: 23 Jan 01:09:58 ntpd[2169]: ---------------------------------------------------- Jan 23 01:09:58.246964 ntpd[2169]: ntp-4 is maintained by Network Time Foundation, Jan 23 01:09:58.247034 ntpd[2169]: 23 Jan 01:09:58 ntpd[2169]: ntp-4 is maintained by Network Time Foundation, Jan 23 01:09:58.247086 ntpd[2169]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 23 01:09:58.247154 ntpd[2169]: 23 Jan 01:09:58 ntpd[2169]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 23 01:09:58.247209 ntpd[2169]: corporation. Support and training for ntp-4 are Jan 23 01:09:58.248868 ntpd[2169]: 23 Jan 01:09:58 ntpd[2169]: corporation. 
Support and training for ntp-4 are Jan 23 01:09:58.248868 ntpd[2169]: 23 Jan 01:09:58 ntpd[2169]: available at https://www.nwtime.org/support Jan 23 01:09:58.248868 ntpd[2169]: 23 Jan 01:09:58 ntpd[2169]: ---------------------------------------------------- Jan 23 01:09:58.248868 ntpd[2169]: 23 Jan 01:09:58 ntpd[2169]: proto: precision = 0.061 usec (-24) Jan 23 01:09:58.248868 ntpd[2169]: 23 Jan 01:09:58 ntpd[2169]: basedate set to 2026-01-10 Jan 23 01:09:58.248868 ntpd[2169]: 23 Jan 01:09:58 ntpd[2169]: gps base set to 2026-01-11 (week 2401) Jan 23 01:09:58.248868 ntpd[2169]: 23 Jan 01:09:58 ntpd[2169]: Listen and drop on 0 v6wildcard [::]:123 Jan 23 01:09:58.248868 ntpd[2169]: 23 Jan 01:09:58 ntpd[2169]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 23 01:09:58.248868 ntpd[2169]: 23 Jan 01:09:58 ntpd[2169]: Listen normally on 2 lo 127.0.0.1:123 Jan 23 01:09:58.248868 ntpd[2169]: 23 Jan 01:09:58 ntpd[2169]: Listen normally on 3 eth0 172.31.21.166:123 Jan 23 01:09:58.248868 ntpd[2169]: 23 Jan 01:09:58 ntpd[2169]: Listen normally on 4 lo [::1]:123 Jan 23 01:09:58.248868 ntpd[2169]: 23 Jan 01:09:58 ntpd[2169]: Listen normally on 5 eth0 [fe80::402:a6ff:fef6:df57%2]:123 Jan 23 01:09:58.248868 ntpd[2169]: 23 Jan 01:09:58 ntpd[2169]: Listening on routing socket on fd #22 for interface updates Jan 23 01:09:58.247223 ntpd[2169]: available at https://www.nwtime.org/support Jan 23 01:09:58.247233 ntpd[2169]: ---------------------------------------------------- Jan 23 01:09:58.247992 ntpd[2169]: proto: precision = 0.061 usec (-24) Jan 23 01:09:58.248257 ntpd[2169]: basedate set to 2026-01-10 Jan 23 01:09:58.248270 ntpd[2169]: gps base set to 2026-01-11 (week 2401) Jan 23 01:09:58.248366 ntpd[2169]: Listen and drop on 0 v6wildcard [::]:123 Jan 23 01:09:58.248393 ntpd[2169]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 23 01:09:58.248576 ntpd[2169]: Listen normally on 2 lo 127.0.0.1:123 Jan 23 01:09:58.248604 ntpd[2169]: Listen normally on 3 eth0 172.31.21.166:123 Jan 23 
01:09:58.248635 ntpd[2169]: Listen normally on 4 lo [::1]:123 Jan 23 01:09:58.248663 ntpd[2169]: Listen normally on 5 eth0 [fe80::402:a6ff:fef6:df57%2]:123 Jan 23 01:09:58.248689 ntpd[2169]: Listening on routing socket on fd #22 for interface updates Jan 23 01:09:58.252624 ntpd[2169]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 23 01:09:58.252780 ntpd[2169]: 23 Jan 01:09:58 ntpd[2169]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 23 01:09:58.252849 ntpd[2169]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 23 01:09:58.252917 ntpd[2169]: 23 Jan 01:09:58 ntpd[2169]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 23 01:09:58.285223 polkitd[2130]: Started polkitd version 126 Jan 23 01:09:58.320189 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 23 01:09:58.323509 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 23 01:09:58.332601 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 23 01:09:58.333667 systemd[1]: Reached target getty.target - Login Prompts. Jan 23 01:09:58.347346 polkitd[2130]: Loading rules from directory /etc/polkit-1/rules.d Jan 23 01:09:58.355861 polkitd[2130]: Loading rules from directory /run/polkit-1/rules.d Jan 23 01:09:58.355936 polkitd[2130]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Jan 23 01:09:58.356390 polkitd[2130]: Loading rules from directory /usr/local/share/polkit-1/rules.d Jan 23 01:09:58.356422 polkitd[2130]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Jan 23 01:09:58.356470 polkitd[2130]: Loading rules from directory /usr/share/polkit-1/rules.d Jan 23 01:09:58.361894 polkitd[2130]: Finished loading, compiling and executing 2 rules Jan 23 01:09:58.363550 systemd[1]: Started polkit.service - Authorization Manager. 
Jan 23 01:09:58.366452 dbus-daemon[1942]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jan 23 01:09:58.370052 polkitd[2130]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jan 23 01:09:58.390352 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 23 01:09:58.415954 systemd-hostnamed[2008]: Hostname set to (transient) Jan 23 01:09:58.416087 systemd-resolved[1859]: System hostname changed to 'ip-172-31-21-166'. Jan 23 01:09:58.420582 amazon-ssm-agent[2178]: Initializing new seelog logger Jan 23 01:09:58.421320 amazon-ssm-agent[2178]: New Seelog Logger Creation Complete Jan 23 01:09:58.421414 amazon-ssm-agent[2178]: 2026/01/23 01:09:58 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 23 01:09:58.421414 amazon-ssm-agent[2178]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 23 01:09:58.421883 amazon-ssm-agent[2178]: 2026/01/23 01:09:58 processing appconfig overrides Jan 23 01:09:58.422866 amazon-ssm-agent[2178]: 2026/01/23 01:09:58 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 23 01:09:58.422866 amazon-ssm-agent[2178]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 23 01:09:58.422977 amazon-ssm-agent[2178]: 2026/01/23 01:09:58 processing appconfig overrides Jan 23 01:09:58.423293 amazon-ssm-agent[2178]: 2026/01/23 01:09:58 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 23 01:09:58.423293 amazon-ssm-agent[2178]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 23 01:09:58.423377 amazon-ssm-agent[2178]: 2026/01/23 01:09:58 processing appconfig overrides Jan 23 01:09:58.423865 amazon-ssm-agent[2178]: 2026-01-23 01:09:58.4222 INFO Proxy environment variables: Jan 23 01:09:58.439437 amazon-ssm-agent[2178]: 2026/01/23 01:09:58 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. 
Jan 23 01:09:58.439437 amazon-ssm-agent[2178]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 23 01:09:58.439596 amazon-ssm-agent[2178]: 2026/01/23 01:09:58 processing appconfig overrides Jan 23 01:09:58.522263 containerd[1975]: time="2026-01-23T01:09:58.522212564Z" level=info msg="Start subscribing containerd event" Jan 23 01:09:58.522534 containerd[1975]: time="2026-01-23T01:09:58.522492906Z" level=info msg="Start recovering state" Jan 23 01:09:58.524027 amazon-ssm-agent[2178]: 2026-01-23 01:09:58.4228 INFO no_proxy: Jan 23 01:09:58.524174 containerd[1975]: time="2026-01-23T01:09:58.524151815Z" level=info msg="Start event monitor" Jan 23 01:09:58.524226 containerd[1975]: time="2026-01-23T01:09:58.524181021Z" level=info msg="Start cni network conf syncer for default" Jan 23 01:09:58.524226 containerd[1975]: time="2026-01-23T01:09:58.524191664Z" level=info msg="Start streaming server" Jan 23 01:09:58.524226 containerd[1975]: time="2026-01-23T01:09:58.524202904Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jan 23 01:09:58.524226 containerd[1975]: time="2026-01-23T01:09:58.524212882Z" level=info msg="runtime interface starting up..." Jan 23 01:09:58.524226 containerd[1975]: time="2026-01-23T01:09:58.524221012Z" level=info msg="starting plugins..." Jan 23 01:09:58.524386 containerd[1975]: time="2026-01-23T01:09:58.524238304Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jan 23 01:09:58.526362 containerd[1975]: time="2026-01-23T01:09:58.526321601Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 23 01:09:58.526446 containerd[1975]: time="2026-01-23T01:09:58.526397403Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 23 01:09:58.526559 systemd[1]: Started containerd.service - containerd container runtime. 
Jan 23 01:09:58.528720 containerd[1975]: time="2026-01-23T01:09:58.528683209Z" level=info msg="containerd successfully booted in 0.537688s" Jan 23 01:09:58.623830 amazon-ssm-agent[2178]: 2026-01-23 01:09:58.4228 INFO https_proxy: Jan 23 01:09:58.722163 amazon-ssm-agent[2178]: 2026-01-23 01:09:58.4228 INFO http_proxy: Jan 23 01:09:58.771013 tar[1964]: linux-amd64/README.md Jan 23 01:09:58.795508 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 23 01:09:58.826077 amazon-ssm-agent[2178]: 2026-01-23 01:09:58.4229 INFO Checking if agent identity type OnPrem can be assumed Jan 23 01:09:58.928268 amazon-ssm-agent[2178]: 2026-01-23 01:09:58.4231 INFO Checking if agent identity type EC2 can be assumed Jan 23 01:09:59.028921 amazon-ssm-agent[2178]: 2026-01-23 01:09:58.4960 INFO Agent will take identity from EC2 Jan 23 01:09:59.128107 amazon-ssm-agent[2178]: 2026-01-23 01:09:58.4997 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.3.0.0 Jan 23 01:09:59.227489 amazon-ssm-agent[2178]: 2026-01-23 01:09:58.4997 INFO [amazon-ssm-agent] OS: linux, Arch: amd64 Jan 23 01:09:59.326906 amazon-ssm-agent[2178]: 2026-01-23 01:09:58.4997 INFO [amazon-ssm-agent] Starting Core Agent Jan 23 01:09:59.427235 amazon-ssm-agent[2178]: 2026-01-23 01:09:58.4997 INFO [amazon-ssm-agent] Registrar detected. Attempting registration Jan 23 01:09:59.481409 amazon-ssm-agent[2178]: 2026/01/23 01:09:59 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 23 01:09:59.481409 amazon-ssm-agent[2178]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
Jan 23 01:09:59.481409 amazon-ssm-agent[2178]: 2026/01/23 01:09:59 processing appconfig overrides Jan 23 01:09:59.517070 amazon-ssm-agent[2178]: 2026-01-23 01:09:58.4997 INFO [Registrar] Starting registrar module Jan 23 01:09:59.517070 amazon-ssm-agent[2178]: 2026-01-23 01:09:58.5017 INFO [EC2Identity] Checking disk for registration info Jan 23 01:09:59.517070 amazon-ssm-agent[2178]: 2026-01-23 01:09:58.5018 INFO [EC2Identity] No registration info found for ec2 instance, attempting registration Jan 23 01:09:59.517070 amazon-ssm-agent[2178]: 2026-01-23 01:09:58.5018 INFO [EC2Identity] Generating registration keypair Jan 23 01:09:59.517070 amazon-ssm-agent[2178]: 2026-01-23 01:09:59.4290 INFO [EC2Identity] Checking write access before registering Jan 23 01:09:59.517070 amazon-ssm-agent[2178]: 2026-01-23 01:09:59.4294 INFO [EC2Identity] Registering EC2 instance with Systems Manager Jan 23 01:09:59.517070 amazon-ssm-agent[2178]: 2026-01-23 01:09:59.4807 INFO [EC2Identity] EC2 registration was successful. Jan 23 01:09:59.517850 amazon-ssm-agent[2178]: 2026-01-23 01:09:59.4808 INFO [amazon-ssm-agent] Registration attempted. Resuming core agent startup. Jan 23 01:09:59.517850 amazon-ssm-agent[2178]: 2026-01-23 01:09:59.4809 INFO [CredentialRefresher] credentialRefresher has started Jan 23 01:09:59.517850 amazon-ssm-agent[2178]: 2026-01-23 01:09:59.4809 INFO [CredentialRefresher] Starting credentials refresher loop Jan 23 01:09:59.517850 amazon-ssm-agent[2178]: 2026-01-23 01:09:59.5165 INFO EC2RoleProvider Successfully connected with instance profile role credentials Jan 23 01:09:59.517850 amazon-ssm-agent[2178]: 2026-01-23 01:09:59.5170 INFO [CredentialRefresher] Credentials ready Jan 23 01:09:59.528470 amazon-ssm-agent[2178]: 2026-01-23 01:09:59.5171 INFO [CredentialRefresher] Next credential rotation will be in 29.999990425766665 minutes Jan 23 01:09:59.805454 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. 
Jan 23 01:09:59.823113 systemd[1]: Started sshd@0-172.31.21.166:22-68.220.241.50:44632.service - OpenSSH per-connection server daemon (68.220.241.50:44632). Jan 23 01:10:00.767712 amazon-ssm-agent[2178]: 2026-01-23 01:10:00.7419 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Jan 23 01:10:00.868373 amazon-ssm-agent[2178]: 2026-01-23 01:10:00.7767 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2239) started Jan 23 01:10:00.969509 amazon-ssm-agent[2178]: 2026-01-23 01:10:00.7767 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Jan 23 01:10:00.994908 sshd[2233]: Accepted publickey for core from 68.220.241.50 port 44632 ssh2: RSA SHA256:TjRK9JlVbt43cjCH9yNUnU6Xa0awhPYO1lN4GVbk/WA Jan 23 01:10:01.003621 sshd-session[2233]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:10:01.051275 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 23 01:10:01.073243 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 23 01:10:01.098521 systemd-logind[1956]: New session 1 of user core. Jan 23 01:10:01.144209 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 23 01:10:01.162497 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 23 01:10:01.205600 (systemd)[2246]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 23 01:10:01.219580 systemd-logind[1956]: New session c1 of user core. Jan 23 01:10:01.978939 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 01:10:01.998715 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 23 01:10:02.021893 systemd[2246]: Queued start job for default target default.target. 
Jan 23 01:10:02.029258 (kubelet)[2263]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 01:10:02.035739 systemd[2246]: Created slice app.slice - User Application Slice. Jan 23 01:10:02.035955 systemd[2246]: Reached target paths.target - Paths. Jan 23 01:10:02.036393 systemd[2246]: Reached target timers.target - Timers. Jan 23 01:10:02.044316 systemd[2246]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 23 01:10:02.101767 systemd[2246]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 23 01:10:02.119479 systemd[2246]: Reached target sockets.target - Sockets. Jan 23 01:10:02.119602 systemd[2246]: Reached target basic.target - Basic System. Jan 23 01:10:02.119664 systemd[2246]: Reached target default.target - Main User Target. Jan 23 01:10:02.119706 systemd[2246]: Startup finished in 865ms. Jan 23 01:10:02.119837 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 23 01:10:02.132046 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 23 01:10:02.133363 systemd[1]: Startup finished in 2.666s (kernel) + 7.182s (initrd) + 9.376s (userspace) = 19.226s. Jan 23 01:10:02.573051 systemd[1]: Started sshd@1-172.31.21.166:22-68.220.241.50:50270.service - OpenSSH per-connection server daemon (68.220.241.50:50270). Jan 23 01:10:03.091807 sshd[2273]: Accepted publickey for core from 68.220.241.50 port 50270 ssh2: RSA SHA256:TjRK9JlVbt43cjCH9yNUnU6Xa0awhPYO1lN4GVbk/WA Jan 23 01:10:03.097113 sshd-session[2273]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:10:03.108302 systemd-logind[1956]: New session 2 of user core. Jan 23 01:10:03.116186 systemd[1]: Started session-2.scope - Session 2 of User core. 
Jan 23 01:10:03.449121 sshd[2280]: Connection closed by 68.220.241.50 port 50270 Jan 23 01:10:03.450342 sshd-session[2273]: pam_unix(sshd:session): session closed for user core Jan 23 01:10:03.464160 systemd[1]: sshd@1-172.31.21.166:22-68.220.241.50:50270.service: Deactivated successfully. Jan 23 01:10:03.467120 systemd[1]: session-2.scope: Deactivated successfully. Jan 23 01:10:03.469834 systemd-logind[1956]: Session 2 logged out. Waiting for processes to exit. Jan 23 01:10:03.471376 systemd-logind[1956]: Removed session 2. Jan 23 01:10:03.551631 systemd[1]: Started sshd@2-172.31.21.166:22-68.220.241.50:50274.service - OpenSSH per-connection server daemon (68.220.241.50:50274). Jan 23 01:10:04.092721 kubelet[2263]: E0123 01:10:04.092638 2263 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 01:10:04.096014 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 01:10:04.096203 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 01:10:04.096657 systemd[1]: kubelet.service: Consumed 1.169s CPU time, 264.3M memory peak. Jan 23 01:10:04.112097 sshd[2286]: Accepted publickey for core from 68.220.241.50 port 50274 ssh2: RSA SHA256:TjRK9JlVbt43cjCH9yNUnU6Xa0awhPYO1lN4GVbk/WA Jan 23 01:10:04.113798 sshd-session[2286]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:10:04.120838 systemd-logind[1956]: New session 3 of user core. Jan 23 01:10:04.136681 systemd[1]: Started session-3.scope - Session 3 of User core. 
Jan 23 01:10:04.488835 sshd[2291]: Connection closed by 68.220.241.50 port 50274 Jan 23 01:10:04.491205 sshd-session[2286]: pam_unix(sshd:session): session closed for user core Jan 23 01:10:04.496153 systemd-logind[1956]: Session 3 logged out. Waiting for processes to exit. Jan 23 01:10:04.497026 systemd[1]: sshd@2-172.31.21.166:22-68.220.241.50:50274.service: Deactivated successfully. Jan 23 01:10:04.499206 systemd[1]: session-3.scope: Deactivated successfully. Jan 23 01:10:04.501107 systemd-logind[1956]: Removed session 3. Jan 23 01:10:04.574982 systemd[1]: Started sshd@3-172.31.21.166:22-68.220.241.50:50276.service - OpenSSH per-connection server daemon (68.220.241.50:50276). Jan 23 01:10:05.096608 sshd[2297]: Accepted publickey for core from 68.220.241.50 port 50276 ssh2: RSA SHA256:TjRK9JlVbt43cjCH9yNUnU6Xa0awhPYO1lN4GVbk/WA Jan 23 01:10:05.098355 sshd-session[2297]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:10:05.104888 systemd-logind[1956]: New session 4 of user core. Jan 23 01:10:05.111035 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 23 01:10:06.599956 systemd-resolved[1859]: Clock change detected. Flushing caches. Jan 23 01:10:06.800115 sshd[2300]: Connection closed by 68.220.241.50 port 50276 Jan 23 01:10:06.801599 sshd-session[2297]: pam_unix(sshd:session): session closed for user core Jan 23 01:10:06.806700 systemd-logind[1956]: Session 4 logged out. Waiting for processes to exit. Jan 23 01:10:06.807450 systemd[1]: sshd@3-172.31.21.166:22-68.220.241.50:50276.service: Deactivated successfully. Jan 23 01:10:06.809665 systemd[1]: session-4.scope: Deactivated successfully. Jan 23 01:10:06.811633 systemd-logind[1956]: Removed session 4. Jan 23 01:10:06.899928 systemd[1]: Started sshd@4-172.31.21.166:22-68.220.241.50:50288.service - OpenSSH per-connection server daemon (68.220.241.50:50288). 
Jan 23 01:10:07.397743 sshd[2306]: Accepted publickey for core from 68.220.241.50 port 50288 ssh2: RSA SHA256:TjRK9JlVbt43cjCH9yNUnU6Xa0awhPYO1lN4GVbk/WA Jan 23 01:10:07.399856 sshd-session[2306]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:10:07.407384 systemd-logind[1956]: New session 5 of user core. Jan 23 01:10:07.411605 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 23 01:10:07.697324 sudo[2310]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 23 01:10:07.697838 sudo[2310]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 01:10:07.715146 sudo[2310]: pam_unix(sudo:session): session closed for user root Jan 23 01:10:07.791293 sshd[2309]: Connection closed by 68.220.241.50 port 50288 Jan 23 01:10:07.793378 sshd-session[2306]: pam_unix(sshd:session): session closed for user core Jan 23 01:10:07.797900 systemd[1]: sshd@4-172.31.21.166:22-68.220.241.50:50288.service: Deactivated successfully. Jan 23 01:10:07.800213 systemd[1]: session-5.scope: Deactivated successfully. Jan 23 01:10:07.801918 systemd-logind[1956]: Session 5 logged out. Waiting for processes to exit. Jan 23 01:10:07.804206 systemd-logind[1956]: Removed session 5. Jan 23 01:10:07.893948 systemd[1]: Started sshd@5-172.31.21.166:22-68.220.241.50:50290.service - OpenSSH per-connection server daemon (68.220.241.50:50290). Jan 23 01:10:08.441979 sshd[2316]: Accepted publickey for core from 68.220.241.50 port 50290 ssh2: RSA SHA256:TjRK9JlVbt43cjCH9yNUnU6Xa0awhPYO1lN4GVbk/WA Jan 23 01:10:08.443543 sshd-session[2316]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:10:08.449996 systemd-logind[1956]: New session 6 of user core. Jan 23 01:10:08.455561 systemd[1]: Started session-6.scope - Session 6 of User core. 
Jan 23 01:10:08.739906 sudo[2321]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 23 01:10:08.740590 sudo[2321]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 01:10:08.746845 sudo[2321]: pam_unix(sudo:session): session closed for user root Jan 23 01:10:08.752783 sudo[2320]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 23 01:10:08.753171 sudo[2320]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 01:10:08.764847 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 23 01:10:08.809347 augenrules[2343]: No rules Jan 23 01:10:08.810559 systemd[1]: audit-rules.service: Deactivated successfully. Jan 23 01:10:08.810795 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 23 01:10:08.813003 sudo[2320]: pam_unix(sudo:session): session closed for user root Jan 23 01:10:08.896447 sshd[2319]: Connection closed by 68.220.241.50 port 50290 Jan 23 01:10:08.898399 sshd-session[2316]: pam_unix(sshd:session): session closed for user core Jan 23 01:10:08.902741 systemd-logind[1956]: Session 6 logged out. Waiting for processes to exit. Jan 23 01:10:08.903528 systemd[1]: sshd@5-172.31.21.166:22-68.220.241.50:50290.service: Deactivated successfully. Jan 23 01:10:08.905376 systemd[1]: session-6.scope: Deactivated successfully. Jan 23 01:10:08.907097 systemd-logind[1956]: Removed session 6. Jan 23 01:10:08.978844 systemd[1]: Started sshd@6-172.31.21.166:22-68.220.241.50:50300.service - OpenSSH per-connection server daemon (68.220.241.50:50300). 
Jan 23 01:10:09.483040 sshd[2352]: Accepted publickey for core from 68.220.241.50 port 50300 ssh2: RSA SHA256:TjRK9JlVbt43cjCH9yNUnU6Xa0awhPYO1lN4GVbk/WA Jan 23 01:10:09.484652 sshd-session[2352]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:10:09.490915 systemd-logind[1956]: New session 7 of user core. Jan 23 01:10:09.496705 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 23 01:10:09.757411 sudo[2356]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 23 01:10:09.757784 sudo[2356]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 01:10:10.332555 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 23 01:10:10.351837 (dockerd)[2377]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 23 01:10:10.856420 dockerd[2377]: time="2026-01-23T01:10:10.856355935Z" level=info msg="Starting up" Jan 23 01:10:10.857902 dockerd[2377]: time="2026-01-23T01:10:10.857788504Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jan 23 01:10:10.872744 dockerd[2377]: time="2026-01-23T01:10:10.872654399Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Jan 23 01:10:10.926523 dockerd[2377]: time="2026-01-23T01:10:10.926454557Z" level=info msg="Loading containers: start." Jan 23 01:10:10.937404 kernel: Initializing XFRM netlink socket Jan 23 01:10:11.186836 (udev-worker)[2398]: Network interface NamePolicy= disabled on kernel command line. Jan 23 01:10:11.237772 systemd-networkd[1858]: docker0: Link UP Jan 23 01:10:11.242983 dockerd[2377]: time="2026-01-23T01:10:11.242927310Z" level=info msg="Loading containers: done." 
Jan 23 01:10:11.262964 dockerd[2377]: time="2026-01-23T01:10:11.262915636Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 23 01:10:11.263133 dockerd[2377]: time="2026-01-23T01:10:11.263003687Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Jan 23 01:10:11.263133 dockerd[2377]: time="2026-01-23T01:10:11.263082951Z" level=info msg="Initializing buildkit" Jan 23 01:10:11.288596 dockerd[2377]: time="2026-01-23T01:10:11.288549263Z" level=info msg="Completed buildkit initialization" Jan 23 01:10:11.296468 dockerd[2377]: time="2026-01-23T01:10:11.296418772Z" level=info msg="Daemon has completed initialization" Jan 23 01:10:11.297561 dockerd[2377]: time="2026-01-23T01:10:11.296617299Z" level=info msg="API listen on /run/docker.sock" Jan 23 01:10:11.296660 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 23 01:10:12.445360 containerd[1975]: time="2026-01-23T01:10:12.444596704Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\"" Jan 23 01:10:13.002520 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount233936575.mount: Deactivated successfully. 
Jan 23 01:10:14.543529 containerd[1975]: time="2026-01-23T01:10:14.543464591Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:10:14.544637 containerd[1975]: time="2026-01-23T01:10:14.544498843Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.11: active requests=0, bytes read=29070647" Jan 23 01:10:14.546628 containerd[1975]: time="2026-01-23T01:10:14.546585917Z" level=info msg="ImageCreate event name:\"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:10:14.549366 containerd[1975]: time="2026-01-23T01:10:14.549261137Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:10:14.551100 containerd[1975]: time="2026-01-23T01:10:14.550363569Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.11\" with image id \"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\", size \"29067246\" in 2.105720085s" Jan 23 01:10:14.551100 containerd[1975]: time="2026-01-23T01:10:14.550416782Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\" returns image reference \"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\"" Jan 23 01:10:14.551100 containerd[1975]: time="2026-01-23T01:10:14.550983228Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\"" Jan 23 01:10:15.496797 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. 
Jan 23 01:10:15.498656 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 01:10:15.799498 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 01:10:15.811813 (kubelet)[2658]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 01:10:15.898753 kubelet[2658]: E0123 01:10:15.898369 2658 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 01:10:15.903892 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 01:10:15.904071 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 01:10:15.904797 systemd[1]: kubelet.service: Consumed 212ms CPU time, 109M memory peak. 
Jan 23 01:10:16.533395 containerd[1975]: time="2026-01-23T01:10:16.533342180Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:10:16.540915 containerd[1975]: time="2026-01-23T01:10:16.540731115Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.11: active requests=0, bytes read=24993354" Jan 23 01:10:16.547373 containerd[1975]: time="2026-01-23T01:10:16.547224870Z" level=info msg="ImageCreate event name:\"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:10:16.552110 containerd[1975]: time="2026-01-23T01:10:16.552039757Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:10:16.553349 containerd[1975]: time="2026-01-23T01:10:16.553189668Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.11\" with image id \"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\", size \"26650388\" in 2.002175848s" Jan 23 01:10:16.553349 containerd[1975]: time="2026-01-23T01:10:16.553235362Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\" returns image reference \"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\"" Jan 23 01:10:16.553816 containerd[1975]: time="2026-01-23T01:10:16.553716367Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\"" Jan 23 01:10:18.032780 containerd[1975]: time="2026-01-23T01:10:18.032711859Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/kube-scheduler:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:10:18.034008 containerd[1975]: time="2026-01-23T01:10:18.033803878Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.11: active requests=0, bytes read=19405076" Jan 23 01:10:18.035052 containerd[1975]: time="2026-01-23T01:10:18.035000615Z" level=info msg="ImageCreate event name:\"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:10:18.037756 containerd[1975]: time="2026-01-23T01:10:18.037722385Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:10:18.038648 containerd[1975]: time="2026-01-23T01:10:18.038614074Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.11\" with image id \"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\", size \"21062128\" in 1.484871292s" Jan 23 01:10:18.038648 containerd[1975]: time="2026-01-23T01:10:18.038649738Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\" returns image reference \"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\"" Jan 23 01:10:18.039101 containerd[1975]: time="2026-01-23T01:10:18.039080123Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\"" Jan 23 01:10:19.169300 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2142481870.mount: Deactivated successfully. 
Jan 23 01:10:19.817703 containerd[1975]: time="2026-01-23T01:10:19.817652263Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:10:19.818607 containerd[1975]: time="2026-01-23T01:10:19.818563093Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.11: active requests=0, bytes read=31161899" Jan 23 01:10:19.821463 containerd[1975]: time="2026-01-23T01:10:19.821433934Z" level=info msg="ImageCreate event name:\"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:10:19.825817 containerd[1975]: time="2026-01-23T01:10:19.824257212Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:10:19.825817 containerd[1975]: time="2026-01-23T01:10:19.825367241Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.11\" with image id \"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\", repo tag \"registry.k8s.io/kube-proxy:v1.32.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\", size \"31160918\" in 1.786259452s" Jan 23 01:10:19.825817 containerd[1975]: time="2026-01-23T01:10:19.825392213Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\" returns image reference \"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\"" Jan 23 01:10:19.826035 containerd[1975]: time="2026-01-23T01:10:19.825972718Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jan 23 01:10:20.317655 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount64525544.mount: Deactivated successfully. 
Jan 23 01:10:21.469455 containerd[1975]: time="2026-01-23T01:10:21.469287930Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:10:21.470641 containerd[1975]: time="2026-01-23T01:10:21.470450752Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Jan 23 01:10:21.472201 containerd[1975]: time="2026-01-23T01:10:21.472162464Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:10:21.475935 containerd[1975]: time="2026-01-23T01:10:21.475530644Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:10:21.476639 containerd[1975]: time="2026-01-23T01:10:21.476611395Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.650608468s" Jan 23 01:10:21.476741 containerd[1975]: time="2026-01-23T01:10:21.476728235Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jan 23 01:10:21.477683 containerd[1975]: time="2026-01-23T01:10:21.477616777Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 23 01:10:21.925521 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1227545903.mount: Deactivated successfully. 
Jan 23 01:10:21.931318 containerd[1975]: time="2026-01-23T01:10:21.931265573Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 01:10:21.932394 containerd[1975]: time="2026-01-23T01:10:21.932236432Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Jan 23 01:10:21.933611 containerd[1975]: time="2026-01-23T01:10:21.933578332Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 01:10:21.937412 containerd[1975]: time="2026-01-23T01:10:21.936072187Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 01:10:21.937412 containerd[1975]: time="2026-01-23T01:10:21.937050834Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 459.403289ms" Jan 23 01:10:21.937412 containerd[1975]: time="2026-01-23T01:10:21.937080197Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jan 23 01:10:21.937622 containerd[1975]: time="2026-01-23T01:10:21.937538047Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jan 23 01:10:22.481058 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1065021355.mount: 
Deactivated successfully. Jan 23 01:10:24.885398 containerd[1975]: time="2026-01-23T01:10:24.885314311Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:10:24.889164 containerd[1975]: time="2026-01-23T01:10:24.888983043Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57682056" Jan 23 01:10:24.892519 containerd[1975]: time="2026-01-23T01:10:24.892480444Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:10:24.898532 containerd[1975]: time="2026-01-23T01:10:24.898459449Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:10:24.899612 containerd[1975]: time="2026-01-23T01:10:24.899442564Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 2.96187782s" Jan 23 01:10:24.899612 containerd[1975]: time="2026-01-23T01:10:24.899491477Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Jan 23 01:10:25.997722 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 23 01:10:26.002611 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 01:10:26.447542 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 23 01:10:26.456770 (kubelet)[2816]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 01:10:26.532203 kubelet[2816]: E0123 01:10:26.532148 2816 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 01:10:26.537282 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 01:10:26.537660 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 01:10:26.538175 systemd[1]: kubelet.service: Consumed 219ms CPU time, 110.8M memory peak. Jan 23 01:10:27.766144 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 01:10:27.766403 systemd[1]: kubelet.service: Consumed 219ms CPU time, 110.8M memory peak. Jan 23 01:10:27.769119 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 01:10:27.803291 systemd[1]: Reload requested from client PID 2830 ('systemctl') (unit session-7.scope)... Jan 23 01:10:27.803311 systemd[1]: Reloading... Jan 23 01:10:27.962881 zram_generator::config[2875]: No configuration found. Jan 23 01:10:28.246002 systemd[1]: Reloading finished in 442 ms. Jan 23 01:10:28.318380 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 23 01:10:28.318492 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 23 01:10:28.318841 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 01:10:28.318905 systemd[1]: kubelet.service: Consumed 146ms CPU time, 98.3M memory peak. Jan 23 01:10:28.321003 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 23 01:10:28.630938 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 01:10:28.642977 (kubelet)[2938]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 23 01:10:28.705645 kubelet[2938]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 23 01:10:28.705645 kubelet[2938]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 23 01:10:28.705645 kubelet[2938]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 23 01:10:28.706114 kubelet[2938]: I0123 01:10:28.705746 2938 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 23 01:10:29.061488 kubelet[2938]: I0123 01:10:29.061435 2938 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 23 01:10:29.061488 kubelet[2938]: I0123 01:10:29.061471 2938 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 23 01:10:29.063368 kubelet[2938]: I0123 01:10:29.062505 2938 server.go:954] "Client rotation is on, will bootstrap in background" Jan 23 01:10:29.123758 kubelet[2938]: I0123 01:10:29.122876 2938 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 23 01:10:29.124010 kubelet[2938]: E0123 01:10:29.123952 2938 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: 
cannot create certificate signing request: Post \"https://172.31.21.166:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.21.166:6443: connect: connection refused" logger="UnhandledError" Jan 23 01:10:29.141853 kubelet[2938]: I0123 01:10:29.141818 2938 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 23 01:10:29.146760 kubelet[2938]: I0123 01:10:29.146725 2938 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 23 01:10:29.149441 kubelet[2938]: I0123 01:10:29.149360 2938 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 23 01:10:29.149595 kubelet[2938]: I0123 01:10:29.149424 2938 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-21-166","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}
],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 23 01:10:29.153075 kubelet[2938]: I0123 01:10:29.152989 2938 topology_manager.go:138] "Creating topology manager with none policy" Jan 23 01:10:29.153075 kubelet[2938]: I0123 01:10:29.153043 2938 container_manager_linux.go:304] "Creating device plugin manager" Jan 23 01:10:29.154759 kubelet[2938]: I0123 01:10:29.154681 2938 state_mem.go:36] "Initialized new in-memory state store" Jan 23 01:10:29.160693 kubelet[2938]: I0123 01:10:29.160651 2938 kubelet.go:446] "Attempting to sync node with API server" Jan 23 01:10:29.160693 kubelet[2938]: I0123 01:10:29.160704 2938 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 23 01:10:29.161152 kubelet[2938]: I0123 01:10:29.160736 2938 kubelet.go:352] "Adding apiserver pod source" Jan 23 01:10:29.161152 kubelet[2938]: I0123 01:10:29.160750 2938 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 23 01:10:29.170590 kubelet[2938]: W0123 01:10:29.168495 2938 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.21.166:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.21.166:6443: connect: connection refused Jan 23 01:10:29.170590 kubelet[2938]: E0123 01:10:29.170251 2938 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.21.166:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.21.166:6443: connect: connection 
refused" logger="UnhandledError" Jan 23 01:10:29.170590 kubelet[2938]: W0123 01:10:29.170388 2938 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.21.166:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-21-166&limit=500&resourceVersion=0": dial tcp 172.31.21.166:6443: connect: connection refused Jan 23 01:10:29.170590 kubelet[2938]: E0123 01:10:29.170422 2938 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.21.166:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-21-166&limit=500&resourceVersion=0\": dial tcp 172.31.21.166:6443: connect: connection refused" logger="UnhandledError" Jan 23 01:10:29.172266 kubelet[2938]: I0123 01:10:29.172090 2938 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Jan 23 01:10:29.177323 kubelet[2938]: I0123 01:10:29.177280 2938 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 23 01:10:29.177480 kubelet[2938]: W0123 01:10:29.177406 2938 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Jan 23 01:10:29.181352 kubelet[2938]: I0123 01:10:29.180786 2938 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 23 01:10:29.181352 kubelet[2938]: I0123 01:10:29.181147 2938 server.go:1287] "Started kubelet" Jan 23 01:10:29.197686 kubelet[2938]: E0123 01:10:29.193761 2938 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.21.166:6443/api/v1/namespaces/default/events\": dial tcp 172.31.21.166:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-21-166.188d36ef199a3feb default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-21-166,UID:ip-172-31-21-166,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-21-166,},FirstTimestamp:2026-01-23 01:10:29.181095915 +0000 UTC m=+0.532342582,LastTimestamp:2026-01-23 01:10:29.181095915 +0000 UTC m=+0.532342582,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-21-166,}" Jan 23 01:10:29.197686 kubelet[2938]: I0123 01:10:29.197297 2938 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 23 01:10:29.198368 kubelet[2938]: I0123 01:10:29.198347 2938 server.go:479] "Adding debug handlers to kubelet server" Jan 23 01:10:29.201881 kubelet[2938]: I0123 01:10:29.201823 2938 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 23 01:10:29.203455 kubelet[2938]: I0123 01:10:29.202452 2938 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 23 01:10:29.203455 kubelet[2938]: I0123 01:10:29.202461 2938 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 23 01:10:29.203886 kubelet[2938]: I0123 01:10:29.203854 2938 dynamic_serving_content.go:135] "Starting controller" 
name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 23 01:10:29.205895 kubelet[2938]: I0123 01:10:29.205874 2938 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 23 01:10:29.207228 kubelet[2938]: I0123 01:10:29.207211 2938 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 23 01:10:29.207407 kubelet[2938]: I0123 01:10:29.207398 2938 reconciler.go:26] "Reconciler: start to sync state" Jan 23 01:10:29.208499 kubelet[2938]: W0123 01:10:29.208454 2938 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.21.166:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.21.166:6443: connect: connection refused Jan 23 01:10:29.208570 kubelet[2938]: E0123 01:10:29.208505 2938 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.21.166:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.21.166:6443: connect: connection refused" logger="UnhandledError" Jan 23 01:10:29.208601 kubelet[2938]: E0123 01:10:29.208563 2938 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.21.166:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-21-166?timeout=10s\": dial tcp 172.31.21.166:6443: connect: connection refused" interval="200ms" Jan 23 01:10:29.208719 kubelet[2938]: E0123 01:10:29.208686 2938 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-21-166\" not found" Jan 23 01:10:29.211614 kubelet[2938]: I0123 01:10:29.211591 2938 factory.go:221] Registration of the systemd container factory successfully Jan 23 01:10:29.211709 kubelet[2938]: I0123 01:10:29.211692 2938 factory.go:219] Registration of the crio container factory failed: Get 
"http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 23 01:10:29.217397 kubelet[2938]: E0123 01:10:29.217319 2938 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 23 01:10:29.217620 kubelet[2938]: I0123 01:10:29.217604 2938 factory.go:221] Registration of the containerd container factory successfully Jan 23 01:10:29.230836 kubelet[2938]: I0123 01:10:29.230652 2938 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 23 01:10:29.232354 kubelet[2938]: I0123 01:10:29.232263 2938 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 23 01:10:29.232354 kubelet[2938]: I0123 01:10:29.232295 2938 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 23 01:10:29.232522 kubelet[2938]: I0123 01:10:29.232402 2938 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jan 23 01:10:29.232522 kubelet[2938]: I0123 01:10:29.232416 2938 kubelet.go:2382] "Starting kubelet main sync loop" Jan 23 01:10:29.232522 kubelet[2938]: E0123 01:10:29.232472 2938 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 23 01:10:29.243191 kubelet[2938]: W0123 01:10:29.243081 2938 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.21.166:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.21.166:6443: connect: connection refused Jan 23 01:10:29.243553 kubelet[2938]: E0123 01:10:29.243389 2938 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.21.166:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.21.166:6443: connect: connection refused" logger="UnhandledError" Jan 23 01:10:29.250799 kubelet[2938]: I0123 01:10:29.250770 2938 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 23 01:10:29.251008 kubelet[2938]: I0123 01:10:29.250806 2938 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 23 01:10:29.251008 kubelet[2938]: I0123 01:10:29.250827 2938 state_mem.go:36] "Initialized new in-memory state store" Jan 23 01:10:29.254014 kubelet[2938]: I0123 01:10:29.253964 2938 policy_none.go:49] "None policy: Start" Jan 23 01:10:29.254014 kubelet[2938]: I0123 01:10:29.254003 2938 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 23 01:10:29.254014 kubelet[2938]: I0123 01:10:29.254021 2938 state_mem.go:35] "Initializing new in-memory state store" Jan 23 01:10:29.277369 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
Jan 23 01:10:29.300111 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 23 01:10:29.304477 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 23 01:10:29.309806 kubelet[2938]: E0123 01:10:29.309745 2938 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-21-166\" not found" Jan 23 01:10:29.314073 kubelet[2938]: I0123 01:10:29.313955 2938 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 23 01:10:29.314582 kubelet[2938]: I0123 01:10:29.314554 2938 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 23 01:10:29.314685 kubelet[2938]: I0123 01:10:29.314577 2938 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 23 01:10:29.315202 kubelet[2938]: I0123 01:10:29.315186 2938 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 23 01:10:29.318393 kubelet[2938]: E0123 01:10:29.318368 2938 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 23 01:10:29.319300 kubelet[2938]: E0123 01:10:29.319282 2938 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-21-166\" not found" Jan 23 01:10:29.347671 systemd[1]: Created slice kubepods-burstable-podc40ba8a5a32217886e5b6de98982a53d.slice - libcontainer container kubepods-burstable-podc40ba8a5a32217886e5b6de98982a53d.slice. 
Jan 23 01:10:29.357286 kubelet[2938]: E0123 01:10:29.357231 2938 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-21-166\" not found" node="ip-172-31-21-166" Jan 23 01:10:29.360193 systemd[1]: Created slice kubepods-burstable-podf363bde80a488c2d25102c53414491e7.slice - libcontainer container kubepods-burstable-podf363bde80a488c2d25102c53414491e7.slice. Jan 23 01:10:29.380538 kubelet[2938]: E0123 01:10:29.380504 2938 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-21-166\" not found" node="ip-172-31-21-166" Jan 23 01:10:29.385193 systemd[1]: Created slice kubepods-burstable-podc06e3f782ff61ccf8b48f8eee237231a.slice - libcontainer container kubepods-burstable-podc06e3f782ff61ccf8b48f8eee237231a.slice. Jan 23 01:10:29.387700 kubelet[2938]: E0123 01:10:29.387672 2938 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-21-166\" not found" node="ip-172-31-21-166" Jan 23 01:10:29.409342 kubelet[2938]: E0123 01:10:29.409293 2938 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.21.166:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-21-166?timeout=10s\": dial tcp 172.31.21.166:6443: connect: connection refused" interval="400ms" Jan 23 01:10:29.416467 kubelet[2938]: I0123 01:10:29.416432 2938 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-21-166" Jan 23 01:10:29.417087 kubelet[2938]: E0123 01:10:29.417036 2938 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.21.166:6443/api/v1/nodes\": dial tcp 172.31.21.166:6443: connect: connection refused" node="ip-172-31-21-166" Jan 23 01:10:29.510714 kubelet[2938]: I0123 01:10:29.510458 2938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c40ba8a5a32217886e5b6de98982a53d-ca-certs\") pod \"kube-apiserver-ip-172-31-21-166\" (UID: \"c40ba8a5a32217886e5b6de98982a53d\") " pod="kube-system/kube-apiserver-ip-172-31-21-166" Jan 23 01:10:29.510714 kubelet[2938]: I0123 01:10:29.510510 2938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c06e3f782ff61ccf8b48f8eee237231a-kubeconfig\") pod \"kube-scheduler-ip-172-31-21-166\" (UID: \"c06e3f782ff61ccf8b48f8eee237231a\") " pod="kube-system/kube-scheduler-ip-172-31-21-166" Jan 23 01:10:29.510714 kubelet[2938]: I0123 01:10:29.510534 2938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c40ba8a5a32217886e5b6de98982a53d-k8s-certs\") pod \"kube-apiserver-ip-172-31-21-166\" (UID: \"c40ba8a5a32217886e5b6de98982a53d\") " pod="kube-system/kube-apiserver-ip-172-31-21-166" Jan 23 01:10:29.510714 kubelet[2938]: I0123 01:10:29.510553 2938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c40ba8a5a32217886e5b6de98982a53d-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-21-166\" (UID: \"c40ba8a5a32217886e5b6de98982a53d\") " pod="kube-system/kube-apiserver-ip-172-31-21-166" Jan 23 01:10:29.510714 kubelet[2938]: I0123 01:10:29.510569 2938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f363bde80a488c2d25102c53414491e7-ca-certs\") pod \"kube-controller-manager-ip-172-31-21-166\" (UID: \"f363bde80a488c2d25102c53414491e7\") " pod="kube-system/kube-controller-manager-ip-172-31-21-166" Jan 23 01:10:29.510964 kubelet[2938]: I0123 01:10:29.510584 2938 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f363bde80a488c2d25102c53414491e7-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-21-166\" (UID: \"f363bde80a488c2d25102c53414491e7\") " pod="kube-system/kube-controller-manager-ip-172-31-21-166" Jan 23 01:10:29.510964 kubelet[2938]: I0123 01:10:29.510599 2938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f363bde80a488c2d25102c53414491e7-k8s-certs\") pod \"kube-controller-manager-ip-172-31-21-166\" (UID: \"f363bde80a488c2d25102c53414491e7\") " pod="kube-system/kube-controller-manager-ip-172-31-21-166" Jan 23 01:10:29.510964 kubelet[2938]: I0123 01:10:29.510615 2938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f363bde80a488c2d25102c53414491e7-kubeconfig\") pod \"kube-controller-manager-ip-172-31-21-166\" (UID: \"f363bde80a488c2d25102c53414491e7\") " pod="kube-system/kube-controller-manager-ip-172-31-21-166" Jan 23 01:10:29.510964 kubelet[2938]: I0123 01:10:29.510632 2938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f363bde80a488c2d25102c53414491e7-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-21-166\" (UID: \"f363bde80a488c2d25102c53414491e7\") " pod="kube-system/kube-controller-manager-ip-172-31-21-166" Jan 23 01:10:29.619638 kubelet[2938]: I0123 01:10:29.618962 2938 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-21-166" Jan 23 01:10:29.619638 kubelet[2938]: E0123 01:10:29.619355 2938 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.21.166:6443/api/v1/nodes\": dial tcp 172.31.21.166:6443: connect: connection refused" 
node="ip-172-31-21-166" Jan 23 01:10:29.658932 containerd[1975]: time="2026-01-23T01:10:29.658821634Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-21-166,Uid:c40ba8a5a32217886e5b6de98982a53d,Namespace:kube-system,Attempt:0,}" Jan 23 01:10:29.682239 containerd[1975]: time="2026-01-23T01:10:29.682089470Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-21-166,Uid:f363bde80a488c2d25102c53414491e7,Namespace:kube-system,Attempt:0,}" Jan 23 01:10:29.689638 containerd[1975]: time="2026-01-23T01:10:29.689599552Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-21-166,Uid:c06e3f782ff61ccf8b48f8eee237231a,Namespace:kube-system,Attempt:0,}" Jan 23 01:10:29.805610 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Jan 23 01:10:29.811902 kubelet[2938]: E0123 01:10:29.811749 2938 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.21.166:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-21-166?timeout=10s\": dial tcp 172.31.21.166:6443: connect: connection refused" interval="800ms" Jan 23 01:10:29.822178 containerd[1975]: time="2026-01-23T01:10:29.822060915Z" level=info msg="connecting to shim d0c535f3775f166ed4d9a65811600721374e389953b21bcd17dcf1e46e585f37" address="unix:///run/containerd/s/37fff392f96666c264842641f1071a829acb748f5435c03e2499c30232b7625d" namespace=k8s.io protocol=ttrpc version=3 Jan 23 01:10:29.832704 containerd[1975]: time="2026-01-23T01:10:29.832664403Z" level=info msg="connecting to shim df2a8b80f05d4d274e4864d3459bb238ec7ad75a902bb6ce31fd44f1b91c87d0" address="unix:///run/containerd/s/0cd1fa27a9d977dd05e068d1b9f6e83711d72428fc26916b5f983162c5998bb1" namespace=k8s.io protocol=ttrpc version=3 Jan 23 01:10:29.836635 containerd[1975]: time="2026-01-23T01:10:29.836564744Z" level=info msg="connecting to shim 
663b6edecd229286ab05de910b489d29d57d05b1b45ef3ec96c55180c2fa3308" address="unix:///run/containerd/s/b7e89d67b9ed61245d8de0877a510b5a1d8b1eafdcfa018e6cd12b1502d3b32b" namespace=k8s.io protocol=ttrpc version=3 Jan 23 01:10:29.949806 systemd[1]: Started cri-containerd-663b6edecd229286ab05de910b489d29d57d05b1b45ef3ec96c55180c2fa3308.scope - libcontainer container 663b6edecd229286ab05de910b489d29d57d05b1b45ef3ec96c55180c2fa3308. Jan 23 01:10:29.960487 systemd[1]: Started cri-containerd-d0c535f3775f166ed4d9a65811600721374e389953b21bcd17dcf1e46e585f37.scope - libcontainer container d0c535f3775f166ed4d9a65811600721374e389953b21bcd17dcf1e46e585f37. Jan 23 01:10:29.965493 systemd[1]: Started cri-containerd-df2a8b80f05d4d274e4864d3459bb238ec7ad75a902bb6ce31fd44f1b91c87d0.scope - libcontainer container df2a8b80f05d4d274e4864d3459bb238ec7ad75a902bb6ce31fd44f1b91c87d0. Jan 23 01:10:30.023055 kubelet[2938]: I0123 01:10:30.022578 2938 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-21-166" Jan 23 01:10:30.023055 kubelet[2938]: E0123 01:10:30.022987 2938 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.21.166:6443/api/v1/nodes\": dial tcp 172.31.21.166:6443: connect: connection refused" node="ip-172-31-21-166" Jan 23 01:10:30.048849 kubelet[2938]: W0123 01:10:30.048672 2938 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.21.166:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.21.166:6443: connect: connection refused Jan 23 01:10:30.048849 kubelet[2938]: E0123 01:10:30.048777 2938 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.21.166:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.21.166:6443: connect: connection refused" 
logger="UnhandledError" Jan 23 01:10:30.068581 containerd[1975]: time="2026-01-23T01:10:30.067735352Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-21-166,Uid:c40ba8a5a32217886e5b6de98982a53d,Namespace:kube-system,Attempt:0,} returns sandbox id \"df2a8b80f05d4d274e4864d3459bb238ec7ad75a902bb6ce31fd44f1b91c87d0\"" Jan 23 01:10:30.078577 containerd[1975]: time="2026-01-23T01:10:30.078508370Z" level=info msg="CreateContainer within sandbox \"df2a8b80f05d4d274e4864d3459bb238ec7ad75a902bb6ce31fd44f1b91c87d0\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 23 01:10:30.100685 containerd[1975]: time="2026-01-23T01:10:30.100583056Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-21-166,Uid:f363bde80a488c2d25102c53414491e7,Namespace:kube-system,Attempt:0,} returns sandbox id \"663b6edecd229286ab05de910b489d29d57d05b1b45ef3ec96c55180c2fa3308\"" Jan 23 01:10:30.104391 containerd[1975]: time="2026-01-23T01:10:30.104351781Z" level=info msg="CreateContainer within sandbox \"663b6edecd229286ab05de910b489d29d57d05b1b45ef3ec96c55180c2fa3308\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 23 01:10:30.107914 containerd[1975]: time="2026-01-23T01:10:30.107849652Z" level=info msg="Container f878dcecae9118e54f55d4d030634856cbd8c20f3aaf66634ea79ee7659d76e2: CDI devices from CRI Config.CDIDevices: []" Jan 23 01:10:30.123356 containerd[1975]: time="2026-01-23T01:10:30.122634005Z" level=info msg="Container 0ffbd25f231cbf8d1fcab01b545d6f893bcf01eea0bffd51f8473957b59cea66: CDI devices from CRI Config.CDIDevices: []" Jan 23 01:10:30.131285 containerd[1975]: time="2026-01-23T01:10:30.131235876Z" level=info msg="CreateContainer within sandbox \"df2a8b80f05d4d274e4864d3459bb238ec7ad75a902bb6ce31fd44f1b91c87d0\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"f878dcecae9118e54f55d4d030634856cbd8c20f3aaf66634ea79ee7659d76e2\"" Jan 
23 01:10:30.136206 containerd[1975]: time="2026-01-23T01:10:30.136155200Z" level=info msg="StartContainer for \"f878dcecae9118e54f55d4d030634856cbd8c20f3aaf66634ea79ee7659d76e2\"" Jan 23 01:10:30.139228 containerd[1975]: time="2026-01-23T01:10:30.137482838Z" level=info msg="CreateContainer within sandbox \"663b6edecd229286ab05de910b489d29d57d05b1b45ef3ec96c55180c2fa3308\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"0ffbd25f231cbf8d1fcab01b545d6f893bcf01eea0bffd51f8473957b59cea66\"" Jan 23 01:10:30.142409 containerd[1975]: time="2026-01-23T01:10:30.142369259Z" level=info msg="StartContainer for \"0ffbd25f231cbf8d1fcab01b545d6f893bcf01eea0bffd51f8473957b59cea66\"" Jan 23 01:10:30.144665 containerd[1975]: time="2026-01-23T01:10:30.144623547Z" level=info msg="connecting to shim 0ffbd25f231cbf8d1fcab01b545d6f893bcf01eea0bffd51f8473957b59cea66" address="unix:///run/containerd/s/b7e89d67b9ed61245d8de0877a510b5a1d8b1eafdcfa018e6cd12b1502d3b32b" protocol=ttrpc version=3 Jan 23 01:10:30.147172 containerd[1975]: time="2026-01-23T01:10:30.145803192Z" level=info msg="connecting to shim f878dcecae9118e54f55d4d030634856cbd8c20f3aaf66634ea79ee7659d76e2" address="unix:///run/containerd/s/0cd1fa27a9d977dd05e068d1b9f6e83711d72428fc26916b5f983162c5998bb1" protocol=ttrpc version=3 Jan 23 01:10:30.155697 containerd[1975]: time="2026-01-23T01:10:30.155293027Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-21-166,Uid:c06e3f782ff61ccf8b48f8eee237231a,Namespace:kube-system,Attempt:0,} returns sandbox id \"d0c535f3775f166ed4d9a65811600721374e389953b21bcd17dcf1e46e585f37\"" Jan 23 01:10:30.160760 containerd[1975]: time="2026-01-23T01:10:30.160719282Z" level=info msg="CreateContainer within sandbox \"d0c535f3775f166ed4d9a65811600721374e389953b21bcd17dcf1e46e585f37\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 23 01:10:30.175543 containerd[1975]: time="2026-01-23T01:10:30.175500859Z" 
level=info msg="Container 473fea9d6998dcb6a3737009df44eb0d5bb3e29d758191b23c85b062dcc2d578: CDI devices from CRI Config.CDIDevices: []" Jan 23 01:10:30.185354 containerd[1975]: time="2026-01-23T01:10:30.185216593Z" level=info msg="CreateContainer within sandbox \"d0c535f3775f166ed4d9a65811600721374e389953b21bcd17dcf1e46e585f37\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"473fea9d6998dcb6a3737009df44eb0d5bb3e29d758191b23c85b062dcc2d578\"" Jan 23 01:10:30.186373 containerd[1975]: time="2026-01-23T01:10:30.185792991Z" level=info msg="StartContainer for \"473fea9d6998dcb6a3737009df44eb0d5bb3e29d758191b23c85b062dcc2d578\"" Jan 23 01:10:30.186776 systemd[1]: Started cri-containerd-0ffbd25f231cbf8d1fcab01b545d6f893bcf01eea0bffd51f8473957b59cea66.scope - libcontainer container 0ffbd25f231cbf8d1fcab01b545d6f893bcf01eea0bffd51f8473957b59cea66. Jan 23 01:10:30.187029 containerd[1975]: time="2026-01-23T01:10:30.187008397Z" level=info msg="connecting to shim 473fea9d6998dcb6a3737009df44eb0d5bb3e29d758191b23c85b062dcc2d578" address="unix:///run/containerd/s/37fff392f96666c264842641f1071a829acb748f5435c03e2499c30232b7625d" protocol=ttrpc version=3 Jan 23 01:10:30.194919 systemd[1]: Started cri-containerd-f878dcecae9118e54f55d4d030634856cbd8c20f3aaf66634ea79ee7659d76e2.scope - libcontainer container f878dcecae9118e54f55d4d030634856cbd8c20f3aaf66634ea79ee7659d76e2. Jan 23 01:10:30.226306 systemd[1]: Started cri-containerd-473fea9d6998dcb6a3737009df44eb0d5bb3e29d758191b23c85b062dcc2d578.scope - libcontainer container 473fea9d6998dcb6a3737009df44eb0d5bb3e29d758191b23c85b062dcc2d578. 
Jan 23 01:10:30.340051 containerd[1975]: time="2026-01-23T01:10:30.339964525Z" level=info msg="StartContainer for \"0ffbd25f231cbf8d1fcab01b545d6f893bcf01eea0bffd51f8473957b59cea66\" returns successfully" Jan 23 01:10:30.369349 containerd[1975]: time="2026-01-23T01:10:30.369291561Z" level=info msg="StartContainer for \"f878dcecae9118e54f55d4d030634856cbd8c20f3aaf66634ea79ee7659d76e2\" returns successfully" Jan 23 01:10:30.372698 containerd[1975]: time="2026-01-23T01:10:30.372620995Z" level=info msg="StartContainer for \"473fea9d6998dcb6a3737009df44eb0d5bb3e29d758191b23c85b062dcc2d578\" returns successfully" Jan 23 01:10:30.580853 kubelet[2938]: W0123 01:10:30.580768 2938 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.21.166:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.21.166:6443: connect: connection refused Jan 23 01:10:30.581020 kubelet[2938]: E0123 01:10:30.580867 2938 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.21.166:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.21.166:6443: connect: connection refused" logger="UnhandledError" Jan 23 01:10:30.608714 kubelet[2938]: W0123 01:10:30.608637 2938 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.21.166:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-21-166&limit=500&resourceVersion=0": dial tcp 172.31.21.166:6443: connect: connection refused Jan 23 01:10:30.608875 kubelet[2938]: E0123 01:10:30.608725 2938 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.21.166:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-21-166&limit=500&resourceVersion=0\": dial tcp 172.31.21.166:6443: 
connect: connection refused" logger="UnhandledError" Jan 23 01:10:30.612762 kubelet[2938]: E0123 01:10:30.612717 2938 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.21.166:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-21-166?timeout=10s\": dial tcp 172.31.21.166:6443: connect: connection refused" interval="1.6s" Jan 23 01:10:30.715963 kubelet[2938]: W0123 01:10:30.715893 2938 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.21.166:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.21.166:6443: connect: connection refused Jan 23 01:10:30.716106 kubelet[2938]: E0123 01:10:30.715976 2938 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.21.166:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.21.166:6443: connect: connection refused" logger="UnhandledError" Jan 23 01:10:30.826655 kubelet[2938]: I0123 01:10:30.826626 2938 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-21-166" Jan 23 01:10:30.827192 kubelet[2938]: E0123 01:10:30.827162 2938 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.21.166:6443/api/v1/nodes\": dial tcp 172.31.21.166:6443: connect: connection refused" node="ip-172-31-21-166" Jan 23 01:10:31.171201 kubelet[2938]: E0123 01:10:31.171154 2938 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.21.166:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.21.166:6443: connect: connection refused" logger="UnhandledError" Jan 23 01:10:31.277918 kubelet[2938]: 
E0123 01:10:31.277666 2938 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-21-166\" not found" node="ip-172-31-21-166" Jan 23 01:10:31.285099 kubelet[2938]: E0123 01:10:31.285069 2938 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-21-166\" not found" node="ip-172-31-21-166" Jan 23 01:10:31.285602 kubelet[2938]: E0123 01:10:31.285572 2938 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-21-166\" not found" node="ip-172-31-21-166" Jan 23 01:10:32.287651 kubelet[2938]: E0123 01:10:32.287624 2938 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-21-166\" not found" node="ip-172-31-21-166" Jan 23 01:10:32.290880 kubelet[2938]: E0123 01:10:32.290858 2938 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-21-166\" not found" node="ip-172-31-21-166" Jan 23 01:10:32.291383 kubelet[2938]: E0123 01:10:32.291363 2938 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-21-166\" not found" node="ip-172-31-21-166" Jan 23 01:10:32.429951 kubelet[2938]: I0123 01:10:32.429927 2938 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-21-166" Jan 23 01:10:33.289103 kubelet[2938]: E0123 01:10:33.289070 2938 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-21-166\" not found" node="ip-172-31-21-166" Jan 23 01:10:33.290363 kubelet[2938]: E0123 01:10:33.289690 2938 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-21-166\" not found" node="ip-172-31-21-166" Jan 23 01:10:33.290363 kubelet[2938]: E0123 01:10:33.290018 
2938 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-21-166\" not found" node="ip-172-31-21-166" Jan 23 01:10:33.674441 kubelet[2938]: I0123 01:10:33.672235 2938 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-21-166" Jan 23 01:10:33.708255 kubelet[2938]: I0123 01:10:33.708091 2938 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-21-166" Jan 23 01:10:33.716424 kubelet[2938]: E0123 01:10:33.716384 2938 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-21-166\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-21-166" Jan 23 01:10:33.716424 kubelet[2938]: I0123 01:10:33.716417 2938 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-21-166" Jan 23 01:10:33.718639 kubelet[2938]: E0123 01:10:33.718610 2938 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-21-166\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-21-166" Jan 23 01:10:33.718639 kubelet[2938]: I0123 01:10:33.718642 2938 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-21-166" Jan 23 01:10:33.720349 kubelet[2938]: E0123 01:10:33.720291 2938 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-21-166\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-21-166" Jan 23 01:10:34.168087 kubelet[2938]: I0123 01:10:34.167781 2938 apiserver.go:52] "Watching apiserver" Jan 23 01:10:34.208196 kubelet[2938]: I0123 01:10:34.207931 2938 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 23 01:10:36.105312 systemd[1]: 
Reload requested from client PID 3210 ('systemctl') (unit session-7.scope)... Jan 23 01:10:36.105723 systemd[1]: Reloading... Jan 23 01:10:36.267459 zram_generator::config[3256]: No configuration found. Jan 23 01:10:36.620291 systemd[1]: Reloading finished in 514 ms. Jan 23 01:10:36.653756 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 01:10:36.667989 systemd[1]: kubelet.service: Deactivated successfully. Jan 23 01:10:36.668266 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 01:10:36.668325 systemd[1]: kubelet.service: Consumed 977ms CPU time, 128.8M memory peak. Jan 23 01:10:36.673723 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 01:10:37.138644 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 01:10:37.152792 (kubelet)[3314]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 23 01:10:37.259669 kubelet[3314]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 23 01:10:37.259669 kubelet[3314]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 23 01:10:37.259669 kubelet[3314]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 23 01:10:37.260615 kubelet[3314]: I0123 01:10:37.260258 3314 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 23 01:10:37.272394 kubelet[3314]: I0123 01:10:37.272295 3314 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 23 01:10:37.273068 kubelet[3314]: I0123 01:10:37.272878 3314 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 23 01:10:37.274351 kubelet[3314]: I0123 01:10:37.274091 3314 server.go:954] "Client rotation is on, will bootstrap in background" Jan 23 01:10:37.282885 sudo[3325]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jan 23 01:10:37.283299 sudo[3325]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jan 23 01:10:37.288243 kubelet[3314]: I0123 01:10:37.288128 3314 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 23 01:10:37.292687 kubelet[3314]: I0123 01:10:37.292641 3314 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 23 01:10:37.297935 kubelet[3314]: I0123 01:10:37.297875 3314 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 23 01:10:37.301738 kubelet[3314]: I0123 01:10:37.301535 3314 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 23 01:10:37.302993 kubelet[3314]: I0123 01:10:37.302176 3314 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 23 01:10:37.302993 kubelet[3314]: I0123 01:10:37.302218 3314 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-21-166","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 23 01:10:37.302993 kubelet[3314]: I0123 01:10:37.302488 3314 topology_manager.go:138] "Creating topology manager with none 
policy" Jan 23 01:10:37.302993 kubelet[3314]: I0123 01:10:37.302505 3314 container_manager_linux.go:304] "Creating device plugin manager" Jan 23 01:10:37.303309 kubelet[3314]: I0123 01:10:37.302569 3314 state_mem.go:36] "Initialized new in-memory state store" Jan 23 01:10:37.303309 kubelet[3314]: I0123 01:10:37.302770 3314 kubelet.go:446] "Attempting to sync node with API server" Jan 23 01:10:37.303482 kubelet[3314]: I0123 01:10:37.302797 3314 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 23 01:10:37.303560 kubelet[3314]: I0123 01:10:37.303511 3314 kubelet.go:352] "Adding apiserver pod source" Jan 23 01:10:37.303560 kubelet[3314]: I0123 01:10:37.303533 3314 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 23 01:10:37.305893 kubelet[3314]: I0123 01:10:37.305860 3314 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Jan 23 01:10:37.306509 kubelet[3314]: I0123 01:10:37.306487 3314 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 23 01:10:37.307039 kubelet[3314]: I0123 01:10:37.307016 3314 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 23 01:10:37.307110 kubelet[3314]: I0123 01:10:37.307060 3314 server.go:1287] "Started kubelet" Jan 23 01:10:37.316508 kubelet[3314]: I0123 01:10:37.316298 3314 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 23 01:10:37.328710 kubelet[3314]: I0123 01:10:37.325808 3314 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 23 01:10:37.328710 kubelet[3314]: I0123 01:10:37.328445 3314 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 23 01:10:37.332248 kubelet[3314]: I0123 01:10:37.331929 3314 server.go:479] "Adding debug handlers to kubelet server" Jan 23 01:10:37.339752 kubelet[3314]: I0123 01:10:37.339068 
3314 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 23 01:10:37.347854 kubelet[3314]: I0123 01:10:37.347721 3314 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 23 01:10:37.348496 kubelet[3314]: I0123 01:10:37.348472 3314 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 23 01:10:37.349118 kubelet[3314]: E0123 01:10:37.348959 3314 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-21-166\" not found" Jan 23 01:10:37.350445 kubelet[3314]: I0123 01:10:37.350402 3314 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 23 01:10:37.350819 kubelet[3314]: I0123 01:10:37.350555 3314 reconciler.go:26] "Reconciler: start to sync state" Jan 23 01:10:37.388003 kubelet[3314]: I0123 01:10:37.387957 3314 factory.go:221] Registration of the systemd container factory successfully Jan 23 01:10:37.388475 kubelet[3314]: I0123 01:10:37.388093 3314 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 23 01:10:37.398800 kubelet[3314]: I0123 01:10:37.397455 3314 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 23 01:10:37.406769 kubelet[3314]: I0123 01:10:37.406715 3314 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 23 01:10:37.406769 kubelet[3314]: I0123 01:10:37.406759 3314 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 23 01:10:37.406957 kubelet[3314]: I0123 01:10:37.406790 3314 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jan 23 01:10:37.406957 kubelet[3314]: I0123 01:10:37.406798 3314 kubelet.go:2382] "Starting kubelet main sync loop" Jan 23 01:10:37.406957 kubelet[3314]: E0123 01:10:37.406861 3314 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 23 01:10:37.421040 kubelet[3314]: I0123 01:10:37.421006 3314 factory.go:221] Registration of the containerd container factory successfully Jan 23 01:10:37.449311 kubelet[3314]: E0123 01:10:37.448867 3314 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 23 01:10:37.507502 kubelet[3314]: E0123 01:10:37.507469 3314 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 23 01:10:37.538120 kubelet[3314]: I0123 01:10:37.538089 3314 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 23 01:10:37.538120 kubelet[3314]: I0123 01:10:37.538115 3314 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 23 01:10:37.538309 kubelet[3314]: I0123 01:10:37.538137 3314 state_mem.go:36] "Initialized new in-memory state store" Jan 23 01:10:37.538369 kubelet[3314]: I0123 01:10:37.538358 3314 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 23 01:10:37.538408 kubelet[3314]: I0123 01:10:37.538372 3314 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 23 01:10:37.538408 kubelet[3314]: I0123 01:10:37.538400 3314 policy_none.go:49] "None policy: Start" Jan 23 01:10:37.538498 kubelet[3314]: I0123 01:10:37.538414 3314 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 23 01:10:37.538498 kubelet[3314]: I0123 01:10:37.538427 3314 state_mem.go:35] "Initializing new in-memory state store" Jan 23 01:10:37.539529 kubelet[3314]: I0123 01:10:37.538574 3314 state_mem.go:75] "Updated machine memory state" Jan 23 01:10:37.546104 kubelet[3314]: 
I0123 01:10:37.546068 3314 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 23 01:10:37.546739 kubelet[3314]: I0123 01:10:37.546287 3314 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 23 01:10:37.546739 kubelet[3314]: I0123 01:10:37.546304 3314 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 23 01:10:37.546889 kubelet[3314]: I0123 01:10:37.546821 3314 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 23 01:10:37.556892 kubelet[3314]: E0123 01:10:37.556681 3314 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 23 01:10:37.675420 kubelet[3314]: I0123 01:10:37.675028 3314 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-21-166" Jan 23 01:10:37.684705 kubelet[3314]: I0123 01:10:37.684455 3314 kubelet_node_status.go:124] "Node was previously registered" node="ip-172-31-21-166" Jan 23 01:10:37.684705 kubelet[3314]: I0123 01:10:37.684532 3314 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-21-166" Jan 23 01:10:37.710355 kubelet[3314]: I0123 01:10:37.709680 3314 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-21-166" Jan 23 01:10:37.710355 kubelet[3314]: I0123 01:10:37.710155 3314 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-21-166" Jan 23 01:10:37.712282 kubelet[3314]: I0123 01:10:37.712228 3314 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-21-166" Jan 23 01:10:37.852393 kubelet[3314]: I0123 01:10:37.852200 3314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/c40ba8a5a32217886e5b6de98982a53d-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-21-166\" (UID: \"c40ba8a5a32217886e5b6de98982a53d\") " pod="kube-system/kube-apiserver-ip-172-31-21-166" Jan 23 01:10:37.852393 kubelet[3314]: I0123 01:10:37.852247 3314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f363bde80a488c2d25102c53414491e7-ca-certs\") pod \"kube-controller-manager-ip-172-31-21-166\" (UID: \"f363bde80a488c2d25102c53414491e7\") " pod="kube-system/kube-controller-manager-ip-172-31-21-166" Jan 23 01:10:37.852393 kubelet[3314]: I0123 01:10:37.852267 3314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f363bde80a488c2d25102c53414491e7-k8s-certs\") pod \"kube-controller-manager-ip-172-31-21-166\" (UID: \"f363bde80a488c2d25102c53414491e7\") " pod="kube-system/kube-controller-manager-ip-172-31-21-166" Jan 23 01:10:37.852393 kubelet[3314]: I0123 01:10:37.852283 3314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c40ba8a5a32217886e5b6de98982a53d-ca-certs\") pod \"kube-apiserver-ip-172-31-21-166\" (UID: \"c40ba8a5a32217886e5b6de98982a53d\") " pod="kube-system/kube-apiserver-ip-172-31-21-166" Jan 23 01:10:37.852393 kubelet[3314]: I0123 01:10:37.852299 3314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c40ba8a5a32217886e5b6de98982a53d-k8s-certs\") pod \"kube-apiserver-ip-172-31-21-166\" (UID: \"c40ba8a5a32217886e5b6de98982a53d\") " pod="kube-system/kube-apiserver-ip-172-31-21-166" Jan 23 01:10:37.853078 kubelet[3314]: I0123 01:10:37.852313 3314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f363bde80a488c2d25102c53414491e7-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-21-166\" (UID: \"f363bde80a488c2d25102c53414491e7\") " pod="kube-system/kube-controller-manager-ip-172-31-21-166" Jan 23 01:10:37.853286 kubelet[3314]: I0123 01:10:37.853029 3314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f363bde80a488c2d25102c53414491e7-kubeconfig\") pod \"kube-controller-manager-ip-172-31-21-166\" (UID: \"f363bde80a488c2d25102c53414491e7\") " pod="kube-system/kube-controller-manager-ip-172-31-21-166" Jan 23 01:10:37.853425 kubelet[3314]: I0123 01:10:37.853373 3314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f363bde80a488c2d25102c53414491e7-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-21-166\" (UID: \"f363bde80a488c2d25102c53414491e7\") " pod="kube-system/kube-controller-manager-ip-172-31-21-166" Jan 23 01:10:37.853425 kubelet[3314]: I0123 01:10:37.853401 3314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c06e3f782ff61ccf8b48f8eee237231a-kubeconfig\") pod \"kube-scheduler-ip-172-31-21-166\" (UID: \"c06e3f782ff61ccf8b48f8eee237231a\") " pod="kube-system/kube-scheduler-ip-172-31-21-166" Jan 23 01:10:37.974820 sudo[3325]: pam_unix(sudo:session): session closed for user root Jan 23 01:10:38.330703 kubelet[3314]: I0123 01:10:38.330660 3314 apiserver.go:52] "Watching apiserver" Jan 23 01:10:38.351167 kubelet[3314]: I0123 01:10:38.351117 3314 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 23 01:10:38.487160 kubelet[3314]: I0123 01:10:38.487129 3314 kubelet.go:3194] "Creating a mirror pod for static pod" 
pod="kube-system/kube-apiserver-ip-172-31-21-166" Jan 23 01:10:38.493132 kubelet[3314]: E0123 01:10:38.492698 3314 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-21-166\" already exists" pod="kube-system/kube-apiserver-ip-172-31-21-166" Jan 23 01:10:38.518519 kubelet[3314]: I0123 01:10:38.518464 3314 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-21-166" podStartSLOduration=1.5184425030000002 podStartE2EDuration="1.518442503s" podCreationTimestamp="2026-01-23 01:10:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 01:10:38.509863807 +0000 UTC m=+1.347424061" watchObservedRunningTime="2026-01-23 01:10:38.518442503 +0000 UTC m=+1.356002751" Jan 23 01:10:38.530568 kubelet[3314]: I0123 01:10:38.530513 3314 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-21-166" podStartSLOduration=1.530497323 podStartE2EDuration="1.530497323s" podCreationTimestamp="2026-01-23 01:10:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 01:10:38.530039542 +0000 UTC m=+1.367599799" watchObservedRunningTime="2026-01-23 01:10:38.530497323 +0000 UTC m=+1.368057570" Jan 23 01:10:38.530740 kubelet[3314]: I0123 01:10:38.530635 3314 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-21-166" podStartSLOduration=1.530629156 podStartE2EDuration="1.530629156s" podCreationTimestamp="2026-01-23 01:10:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 01:10:38.519735633 +0000 UTC m=+1.357295901" watchObservedRunningTime="2026-01-23 01:10:38.530629156 +0000 UTC m=+1.368189411" Jan 23 
01:10:39.778255 sudo[2356]: pam_unix(sudo:session): session closed for user root Jan 23 01:10:39.854454 sshd[2355]: Connection closed by 68.220.241.50 port 50300 Jan 23 01:10:39.855465 sshd-session[2352]: pam_unix(sshd:session): session closed for user core Jan 23 01:10:39.859410 systemd[1]: sshd@6-172.31.21.166:22-68.220.241.50:50300.service: Deactivated successfully. Jan 23 01:10:39.861844 systemd[1]: session-7.scope: Deactivated successfully. Jan 23 01:10:39.862037 systemd[1]: session-7.scope: Consumed 4.563s CPU time, 206.5M memory peak. Jan 23 01:10:39.863735 systemd-logind[1956]: Session 7 logged out. Waiting for processes to exit. Jan 23 01:10:39.865967 systemd-logind[1956]: Removed session 7. Jan 23 01:10:40.821132 kubelet[3314]: I0123 01:10:40.821048 3314 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 23 01:10:40.821965 containerd[1975]: time="2026-01-23T01:10:40.821928746Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 23 01:10:40.822309 kubelet[3314]: I0123 01:10:40.822161 3314 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 23 01:10:41.656604 systemd[1]: Created slice kubepods-besteffort-pode70356c1_07e3_4704_8d55_2a0cf4808b58.slice - libcontainer container kubepods-besteffort-pode70356c1_07e3_4704_8d55_2a0cf4808b58.slice. 
Jan 23 01:10:41.678880 kubelet[3314]: I0123 01:10:41.678836 3314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e70356c1-07e3-4704-8d55-2a0cf4808b58-kube-proxy\") pod \"kube-proxy-jnmkf\" (UID: \"e70356c1-07e3-4704-8d55-2a0cf4808b58\") " pod="kube-system/kube-proxy-jnmkf" Jan 23 01:10:41.678880 kubelet[3314]: I0123 01:10:41.678890 3314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e70356c1-07e3-4704-8d55-2a0cf4808b58-xtables-lock\") pod \"kube-proxy-jnmkf\" (UID: \"e70356c1-07e3-4704-8d55-2a0cf4808b58\") " pod="kube-system/kube-proxy-jnmkf" Jan 23 01:10:41.679090 kubelet[3314]: I0123 01:10:41.678919 3314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qrcx\" (UniqueName: \"kubernetes.io/projected/e70356c1-07e3-4704-8d55-2a0cf4808b58-kube-api-access-9qrcx\") pod \"kube-proxy-jnmkf\" (UID: \"e70356c1-07e3-4704-8d55-2a0cf4808b58\") " pod="kube-system/kube-proxy-jnmkf" Jan 23 01:10:41.679090 kubelet[3314]: I0123 01:10:41.678941 3314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e70356c1-07e3-4704-8d55-2a0cf4808b58-lib-modules\") pod \"kube-proxy-jnmkf\" (UID: \"e70356c1-07e3-4704-8d55-2a0cf4808b58\") " pod="kube-system/kube-proxy-jnmkf" Jan 23 01:10:41.694033 systemd[1]: Created slice kubepods-burstable-pod55d93638_13b7_406a_8971_2c9c72e13447.slice - libcontainer container kubepods-burstable-pod55d93638_13b7_406a_8971_2c9c72e13447.slice. 
Jan 23 01:10:41.779834 kubelet[3314]: I0123 01:10:41.779794 3314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/55d93638-13b7-406a-8971-2c9c72e13447-host-proc-sys-kernel\") pod \"cilium-89dp9\" (UID: \"55d93638-13b7-406a-8971-2c9c72e13447\") " pod="kube-system/cilium-89dp9" Jan 23 01:10:41.780042 kubelet[3314]: I0123 01:10:41.780029 3314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/55d93638-13b7-406a-8971-2c9c72e13447-bpf-maps\") pod \"cilium-89dp9\" (UID: \"55d93638-13b7-406a-8971-2c9c72e13447\") " pod="kube-system/cilium-89dp9" Jan 23 01:10:41.780129 kubelet[3314]: I0123 01:10:41.780118 3314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/55d93638-13b7-406a-8971-2c9c72e13447-clustermesh-secrets\") pod \"cilium-89dp9\" (UID: \"55d93638-13b7-406a-8971-2c9c72e13447\") " pod="kube-system/cilium-89dp9" Jan 23 01:10:41.780259 kubelet[3314]: I0123 01:10:41.780229 3314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/55d93638-13b7-406a-8971-2c9c72e13447-cilium-run\") pod \"cilium-89dp9\" (UID: \"55d93638-13b7-406a-8971-2c9c72e13447\") " pod="kube-system/cilium-89dp9" Jan 23 01:10:41.780630 kubelet[3314]: I0123 01:10:41.780560 3314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/55d93638-13b7-406a-8971-2c9c72e13447-hubble-tls\") pod \"cilium-89dp9\" (UID: \"55d93638-13b7-406a-8971-2c9c72e13447\") " pod="kube-system/cilium-89dp9" Jan 23 01:10:41.780730 kubelet[3314]: I0123 01:10:41.780638 3314 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/55d93638-13b7-406a-8971-2c9c72e13447-lib-modules\") pod \"cilium-89dp9\" (UID: \"55d93638-13b7-406a-8971-2c9c72e13447\") " pod="kube-system/cilium-89dp9" Jan 23 01:10:41.780730 kubelet[3314]: I0123 01:10:41.780683 3314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/55d93638-13b7-406a-8971-2c9c72e13447-cilium-config-path\") pod \"cilium-89dp9\" (UID: \"55d93638-13b7-406a-8971-2c9c72e13447\") " pod="kube-system/cilium-89dp9" Jan 23 01:10:41.780939 kubelet[3314]: I0123 01:10:41.780755 3314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/55d93638-13b7-406a-8971-2c9c72e13447-cilium-cgroup\") pod \"cilium-89dp9\" (UID: \"55d93638-13b7-406a-8971-2c9c72e13447\") " pod="kube-system/cilium-89dp9" Jan 23 01:10:41.780939 kubelet[3314]: I0123 01:10:41.780783 3314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/55d93638-13b7-406a-8971-2c9c72e13447-xtables-lock\") pod \"cilium-89dp9\" (UID: \"55d93638-13b7-406a-8971-2c9c72e13447\") " pod="kube-system/cilium-89dp9" Jan 23 01:10:41.781037 kubelet[3314]: I0123 01:10:41.780817 3314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/55d93638-13b7-406a-8971-2c9c72e13447-etc-cni-netd\") pod \"cilium-89dp9\" (UID: \"55d93638-13b7-406a-8971-2c9c72e13447\") " pod="kube-system/cilium-89dp9" Jan 23 01:10:41.781170 kubelet[3314]: I0123 01:10:41.781053 3314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qccsr\" (UniqueName: 
\"kubernetes.io/projected/55d93638-13b7-406a-8971-2c9c72e13447-kube-api-access-qccsr\") pod \"cilium-89dp9\" (UID: \"55d93638-13b7-406a-8971-2c9c72e13447\") " pod="kube-system/cilium-89dp9" Jan 23 01:10:41.781238 kubelet[3314]: I0123 01:10:41.781212 3314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/55d93638-13b7-406a-8971-2c9c72e13447-hostproc\") pod \"cilium-89dp9\" (UID: \"55d93638-13b7-406a-8971-2c9c72e13447\") " pod="kube-system/cilium-89dp9" Jan 23 01:10:41.781289 kubelet[3314]: I0123 01:10:41.781254 3314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/55d93638-13b7-406a-8971-2c9c72e13447-cni-path\") pod \"cilium-89dp9\" (UID: \"55d93638-13b7-406a-8971-2c9c72e13447\") " pod="kube-system/cilium-89dp9" Jan 23 01:10:41.781397 kubelet[3314]: I0123 01:10:41.781289 3314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/55d93638-13b7-406a-8971-2c9c72e13447-host-proc-sys-net\") pod \"cilium-89dp9\" (UID: \"55d93638-13b7-406a-8971-2c9c72e13447\") " pod="kube-system/cilium-89dp9" Jan 23 01:10:41.840706 kubelet[3314]: I0123 01:10:41.840589 3314 status_manager.go:890] "Failed to get status for pod" podUID="346b77ab-3aca-42c1-b651-a5ea5e392a72" pod="kube-system/cilium-operator-6c4d7847fc-9jsvr" err="pods \"cilium-operator-6c4d7847fc-9jsvr\" is forbidden: User \"system:node:ip-172-31-21-166\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-21-166' and this object" Jan 23 01:10:41.850380 systemd[1]: Created slice kubepods-besteffort-pod346b77ab_3aca_42c1_b651_a5ea5e392a72.slice - libcontainer container kubepods-besteffort-pod346b77ab_3aca_42c1_b651_a5ea5e392a72.slice. 
Jan 23 01:10:41.884446 kubelet[3314]: I0123 01:10:41.882432 3314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fkjj6\" (UniqueName: \"kubernetes.io/projected/346b77ab-3aca-42c1-b651-a5ea5e392a72-kube-api-access-fkjj6\") pod \"cilium-operator-6c4d7847fc-9jsvr\" (UID: \"346b77ab-3aca-42c1-b651-a5ea5e392a72\") " pod="kube-system/cilium-operator-6c4d7847fc-9jsvr" Jan 23 01:10:41.884446 kubelet[3314]: I0123 01:10:41.882486 3314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/346b77ab-3aca-42c1-b651-a5ea5e392a72-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-9jsvr\" (UID: \"346b77ab-3aca-42c1-b651-a5ea5e392a72\") " pod="kube-system/cilium-operator-6c4d7847fc-9jsvr" Jan 23 01:10:41.969709 containerd[1975]: time="2026-01-23T01:10:41.969587486Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jnmkf,Uid:e70356c1-07e3-4704-8d55-2a0cf4808b58,Namespace:kube-system,Attempt:0,}" Jan 23 01:10:41.993855 containerd[1975]: time="2026-01-23T01:10:41.993527326Z" level=info msg="connecting to shim f228a7b4bf374e8bfcf63c7a510da60e0a79a7985c23ea586ca966a1e572890a" address="unix:///run/containerd/s/f456f2053b0ba09ef48d7f2a7250021bd8f5058cf68be7c5aa76f693b08999a2" namespace=k8s.io protocol=ttrpc version=3 Jan 23 01:10:41.999590 containerd[1975]: time="2026-01-23T01:10:41.999547621Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-89dp9,Uid:55d93638-13b7-406a-8971-2c9c72e13447,Namespace:kube-system,Attempt:0,}" Jan 23 01:10:42.037116 containerd[1975]: time="2026-01-23T01:10:42.037066863Z" level=info msg="connecting to shim 454c880c8a7e96999daf8ecf6ec67013c69c258d5161f1b736413abd77fcb238" address="unix:///run/containerd/s/c6baadcab3aa752b4009e867f364df39bf592cb63b9819c06cb0b0be9a320341" namespace=k8s.io protocol=ttrpc version=3 Jan 23 01:10:42.038988 systemd[1]: Started 
cri-containerd-f228a7b4bf374e8bfcf63c7a510da60e0a79a7985c23ea586ca966a1e572890a.scope - libcontainer container f228a7b4bf374e8bfcf63c7a510da60e0a79a7985c23ea586ca966a1e572890a. Jan 23 01:10:42.075594 systemd[1]: Started cri-containerd-454c880c8a7e96999daf8ecf6ec67013c69c258d5161f1b736413abd77fcb238.scope - libcontainer container 454c880c8a7e96999daf8ecf6ec67013c69c258d5161f1b736413abd77fcb238. Jan 23 01:10:42.135653 containerd[1975]: time="2026-01-23T01:10:42.135519926Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jnmkf,Uid:e70356c1-07e3-4704-8d55-2a0cf4808b58,Namespace:kube-system,Attempt:0,} returns sandbox id \"f228a7b4bf374e8bfcf63c7a510da60e0a79a7985c23ea586ca966a1e572890a\"" Jan 23 01:10:42.142358 containerd[1975]: time="2026-01-23T01:10:42.141658613Z" level=info msg="CreateContainer within sandbox \"f228a7b4bf374e8bfcf63c7a510da60e0a79a7985c23ea586ca966a1e572890a\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 23 01:10:42.157772 containerd[1975]: time="2026-01-23T01:10:42.157528707Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-9jsvr,Uid:346b77ab-3aca-42c1-b651-a5ea5e392a72,Namespace:kube-system,Attempt:0,}" Jan 23 01:10:42.158914 containerd[1975]: time="2026-01-23T01:10:42.158874754Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-89dp9,Uid:55d93638-13b7-406a-8971-2c9c72e13447,Namespace:kube-system,Attempt:0,} returns sandbox id \"454c880c8a7e96999daf8ecf6ec67013c69c258d5161f1b736413abd77fcb238\"" Jan 23 01:10:42.163068 containerd[1975]: time="2026-01-23T01:10:42.163001595Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jan 23 01:10:42.191872 containerd[1975]: time="2026-01-23T01:10:42.191832640Z" level=info msg="Container d6aa6e6ce1b4b1d2bcc8aebad8be6c0b07732baa61c1ae7fc49bc3f52371e44b: CDI devices from CRI Config.CDIDevices: []" Jan 23 01:10:42.202705 
containerd[1975]: time="2026-01-23T01:10:42.202637234Z" level=info msg="CreateContainer within sandbox \"f228a7b4bf374e8bfcf63c7a510da60e0a79a7985c23ea586ca966a1e572890a\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"d6aa6e6ce1b4b1d2bcc8aebad8be6c0b07732baa61c1ae7fc49bc3f52371e44b\"" Jan 23 01:10:42.205838 containerd[1975]: time="2026-01-23T01:10:42.205586801Z" level=info msg="StartContainer for \"d6aa6e6ce1b4b1d2bcc8aebad8be6c0b07732baa61c1ae7fc49bc3f52371e44b\"" Jan 23 01:10:42.209824 containerd[1975]: time="2026-01-23T01:10:42.209772340Z" level=info msg="connecting to shim d6aa6e6ce1b4b1d2bcc8aebad8be6c0b07732baa61c1ae7fc49bc3f52371e44b" address="unix:///run/containerd/s/f456f2053b0ba09ef48d7f2a7250021bd8f5058cf68be7c5aa76f693b08999a2" protocol=ttrpc version=3 Jan 23 01:10:42.214283 containerd[1975]: time="2026-01-23T01:10:42.213555796Z" level=info msg="connecting to shim 17b7cbdf5c8cecccdb5cd33336bf220cf4f721272468cce858cb04f621177da3" address="unix:///run/containerd/s/5f933e55f5544016a33c9350face3493be080967c61531c2c507af6141ba9deb" namespace=k8s.io protocol=ttrpc version=3 Jan 23 01:10:42.239628 systemd[1]: Started cri-containerd-d6aa6e6ce1b4b1d2bcc8aebad8be6c0b07732baa61c1ae7fc49bc3f52371e44b.scope - libcontainer container d6aa6e6ce1b4b1d2bcc8aebad8be6c0b07732baa61c1ae7fc49bc3f52371e44b. Jan 23 01:10:42.256564 systemd[1]: Started cri-containerd-17b7cbdf5c8cecccdb5cd33336bf220cf4f721272468cce858cb04f621177da3.scope - libcontainer container 17b7cbdf5c8cecccdb5cd33336bf220cf4f721272468cce858cb04f621177da3. 
Jan 23 01:10:42.338888 containerd[1975]: time="2026-01-23T01:10:42.338842524Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-9jsvr,Uid:346b77ab-3aca-42c1-b651-a5ea5e392a72,Namespace:kube-system,Attempt:0,} returns sandbox id \"17b7cbdf5c8cecccdb5cd33336bf220cf4f721272468cce858cb04f621177da3\"" Jan 23 01:10:42.342167 containerd[1975]: time="2026-01-23T01:10:42.341712438Z" level=info msg="StartContainer for \"d6aa6e6ce1b4b1d2bcc8aebad8be6c0b07732baa61c1ae7fc49bc3f52371e44b\" returns successfully" Jan 23 01:10:42.523320 kubelet[3314]: I0123 01:10:42.523266 3314 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-jnmkf" podStartSLOduration=1.523248706 podStartE2EDuration="1.523248706s" podCreationTimestamp="2026-01-23 01:10:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 01:10:42.512531235 +0000 UTC m=+5.350091498" watchObservedRunningTime="2026-01-23 01:10:42.523248706 +0000 UTC m=+5.360808940" Jan 23 01:10:44.260356 update_engine[1959]: I20260123 01:10:44.260217 1959 update_attempter.cc:509] Updating boot flags... Jan 23 01:10:47.656674 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3873023281.mount: Deactivated successfully. 
Jan 23 01:10:50.248622 containerd[1975]: time="2026-01-23T01:10:50.248550879Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:10:50.251478 containerd[1975]: time="2026-01-23T01:10:50.251417671Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Jan 23 01:10:50.265353 containerd[1975]: time="2026-01-23T01:10:50.263993503Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:10:50.265612 containerd[1975]: time="2026-01-23T01:10:50.265584626Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 8.102034801s" Jan 23 01:10:50.265736 containerd[1975]: time="2026-01-23T01:10:50.265722420Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jan 23 01:10:50.268606 containerd[1975]: time="2026-01-23T01:10:50.268578676Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jan 23 01:10:50.270522 containerd[1975]: time="2026-01-23T01:10:50.270495249Z" level=info msg="CreateContainer within sandbox \"454c880c8a7e96999daf8ecf6ec67013c69c258d5161f1b736413abd77fcb238\" for container 
&ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 23 01:10:50.315057 containerd[1975]: time="2026-01-23T01:10:50.315021220Z" level=info msg="Container c196ce2cd34eaf38409411037a2c6b2dd3ff8a17bff114f4538e9a6f6c621be0: CDI devices from CRI Config.CDIDevices: []" Jan 23 01:10:50.320297 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1041406022.mount: Deactivated successfully. Jan 23 01:10:50.333694 containerd[1975]: time="2026-01-23T01:10:50.333654720Z" level=info msg="CreateContainer within sandbox \"454c880c8a7e96999daf8ecf6ec67013c69c258d5161f1b736413abd77fcb238\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"c196ce2cd34eaf38409411037a2c6b2dd3ff8a17bff114f4538e9a6f6c621be0\"" Jan 23 01:10:50.334507 containerd[1975]: time="2026-01-23T01:10:50.334436336Z" level=info msg="StartContainer for \"c196ce2cd34eaf38409411037a2c6b2dd3ff8a17bff114f4538e9a6f6c621be0\"" Jan 23 01:10:50.335758 containerd[1975]: time="2026-01-23T01:10:50.335727890Z" level=info msg="connecting to shim c196ce2cd34eaf38409411037a2c6b2dd3ff8a17bff114f4538e9a6f6c621be0" address="unix:///run/containerd/s/c6baadcab3aa752b4009e867f364df39bf592cb63b9819c06cb0b0be9a320341" protocol=ttrpc version=3 Jan 23 01:10:50.419610 systemd[1]: Started cri-containerd-c196ce2cd34eaf38409411037a2c6b2dd3ff8a17bff114f4538e9a6f6c621be0.scope - libcontainer container c196ce2cd34eaf38409411037a2c6b2dd3ff8a17bff114f4538e9a6f6c621be0. Jan 23 01:10:50.469185 containerd[1975]: time="2026-01-23T01:10:50.469063786Z" level=info msg="StartContainer for \"c196ce2cd34eaf38409411037a2c6b2dd3ff8a17bff114f4538e9a6f6c621be0\" returns successfully" Jan 23 01:10:50.485870 systemd[1]: cri-containerd-c196ce2cd34eaf38409411037a2c6b2dd3ff8a17bff114f4538e9a6f6c621be0.scope: Deactivated successfully. 
Jan 23 01:10:50.532086 containerd[1975]: time="2026-01-23T01:10:50.530450320Z" level=info msg="received container exit event container_id:\"c196ce2cd34eaf38409411037a2c6b2dd3ff8a17bff114f4538e9a6f6c621be0\" id:\"c196ce2cd34eaf38409411037a2c6b2dd3ff8a17bff114f4538e9a6f6c621be0\" pid:3911 exited_at:{seconds:1769130650 nanos:492429033}" Jan 23 01:10:50.593301 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c196ce2cd34eaf38409411037a2c6b2dd3ff8a17bff114f4538e9a6f6c621be0-rootfs.mount: Deactivated successfully. Jan 23 01:10:51.543343 containerd[1975]: time="2026-01-23T01:10:51.543251138Z" level=info msg="CreateContainer within sandbox \"454c880c8a7e96999daf8ecf6ec67013c69c258d5161f1b736413abd77fcb238\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 23 01:10:51.561102 containerd[1975]: time="2026-01-23T01:10:51.559892666Z" level=info msg="Container 5c37ee394192ee1eaad4f2b131d411dd12dd67f7cf894a62be1bcd6d44f21a04: CDI devices from CRI Config.CDIDevices: []" Jan 23 01:10:51.568629 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1583639392.mount: Deactivated successfully. 
Jan 23 01:10:51.587555 containerd[1975]: time="2026-01-23T01:10:51.587089856Z" level=info msg="CreateContainer within sandbox \"454c880c8a7e96999daf8ecf6ec67013c69c258d5161f1b736413abd77fcb238\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"5c37ee394192ee1eaad4f2b131d411dd12dd67f7cf894a62be1bcd6d44f21a04\"" Jan 23 01:10:51.591115 containerd[1975]: time="2026-01-23T01:10:51.591076845Z" level=info msg="StartContainer for \"5c37ee394192ee1eaad4f2b131d411dd12dd67f7cf894a62be1bcd6d44f21a04\"" Jan 23 01:10:51.594618 containerd[1975]: time="2026-01-23T01:10:51.594572351Z" level=info msg="connecting to shim 5c37ee394192ee1eaad4f2b131d411dd12dd67f7cf894a62be1bcd6d44f21a04" address="unix:///run/containerd/s/c6baadcab3aa752b4009e867f364df39bf592cb63b9819c06cb0b0be9a320341" protocol=ttrpc version=3 Jan 23 01:10:51.643697 systemd[1]: Started cri-containerd-5c37ee394192ee1eaad4f2b131d411dd12dd67f7cf894a62be1bcd6d44f21a04.scope - libcontainer container 5c37ee394192ee1eaad4f2b131d411dd12dd67f7cf894a62be1bcd6d44f21a04. Jan 23 01:10:51.725879 containerd[1975]: time="2026-01-23T01:10:51.725836851Z" level=info msg="StartContainer for \"5c37ee394192ee1eaad4f2b131d411dd12dd67f7cf894a62be1bcd6d44f21a04\" returns successfully" Jan 23 01:10:51.738067 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 23 01:10:51.738293 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 23 01:10:51.741998 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jan 23 01:10:51.744723 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 23 01:10:51.748311 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. 
Jan 23 01:10:51.750485 containerd[1975]: time="2026-01-23T01:10:51.750083697Z" level=info msg="received container exit event container_id:\"5c37ee394192ee1eaad4f2b131d411dd12dd67f7cf894a62be1bcd6d44f21a04\" id:\"5c37ee394192ee1eaad4f2b131d411dd12dd67f7cf894a62be1bcd6d44f21a04\" pid:3969 exited_at:{seconds:1769130651 nanos:749828614}" Jan 23 01:10:51.750209 systemd[1]: cri-containerd-5c37ee394192ee1eaad4f2b131d411dd12dd67f7cf894a62be1bcd6d44f21a04.scope: Deactivated successfully. Jan 23 01:10:51.791581 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 23 01:10:52.040157 containerd[1975]: time="2026-01-23T01:10:52.040105684Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:10:52.041288 containerd[1975]: time="2026-01-23T01:10:52.041239615Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Jan 23 01:10:52.042570 containerd[1975]: time="2026-01-23T01:10:52.042516849Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:10:52.043897 containerd[1975]: time="2026-01-23T01:10:52.043604456Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 1.774167208s" Jan 23 01:10:52.043897 containerd[1975]: time="2026-01-23T01:10:52.043634889Z" level=info msg="PullImage 
\"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jan 23 01:10:52.056816 containerd[1975]: time="2026-01-23T01:10:52.056769144Z" level=info msg="CreateContainer within sandbox \"17b7cbdf5c8cecccdb5cd33336bf220cf4f721272468cce858cb04f621177da3\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jan 23 01:10:52.071104 containerd[1975]: time="2026-01-23T01:10:52.071052384Z" level=info msg="Container 25a0e63be5de55af17308ed55d13ff9123c2e0de36e6b7d94fdec982d66e6894: CDI devices from CRI Config.CDIDevices: []" Jan 23 01:10:52.090519 containerd[1975]: time="2026-01-23T01:10:52.090470872Z" level=info msg="CreateContainer within sandbox \"17b7cbdf5c8cecccdb5cd33336bf220cf4f721272468cce858cb04f621177da3\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"25a0e63be5de55af17308ed55d13ff9123c2e0de36e6b7d94fdec982d66e6894\"" Jan 23 01:10:52.091266 containerd[1975]: time="2026-01-23T01:10:52.091233419Z" level=info msg="StartContainer for \"25a0e63be5de55af17308ed55d13ff9123c2e0de36e6b7d94fdec982d66e6894\"" Jan 23 01:10:52.092318 containerd[1975]: time="2026-01-23T01:10:52.092286359Z" level=info msg="connecting to shim 25a0e63be5de55af17308ed55d13ff9123c2e0de36e6b7d94fdec982d66e6894" address="unix:///run/containerd/s/5f933e55f5544016a33c9350face3493be080967c61531c2c507af6141ba9deb" protocol=ttrpc version=3 Jan 23 01:10:52.117615 systemd[1]: Started cri-containerd-25a0e63be5de55af17308ed55d13ff9123c2e0de36e6b7d94fdec982d66e6894.scope - libcontainer container 25a0e63be5de55af17308ed55d13ff9123c2e0de36e6b7d94fdec982d66e6894. 
Jan 23 01:10:52.158267 containerd[1975]: time="2026-01-23T01:10:52.158226054Z" level=info msg="StartContainer for \"25a0e63be5de55af17308ed55d13ff9123c2e0de36e6b7d94fdec982d66e6894\" returns successfully" Jan 23 01:10:52.322002 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5c37ee394192ee1eaad4f2b131d411dd12dd67f7cf894a62be1bcd6d44f21a04-rootfs.mount: Deactivated successfully. Jan 23 01:10:52.559295 containerd[1975]: time="2026-01-23T01:10:52.559259094Z" level=info msg="CreateContainer within sandbox \"454c880c8a7e96999daf8ecf6ec67013c69c258d5161f1b736413abd77fcb238\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 23 01:10:52.640676 containerd[1975]: time="2026-01-23T01:10:52.640566660Z" level=info msg="Container 8cd490cfbec935518e7521d25f7d464f4198688f566128b344cc1d7144c8e7fd: CDI devices from CRI Config.CDIDevices: []" Jan 23 01:10:52.663361 containerd[1975]: time="2026-01-23T01:10:52.661985678Z" level=info msg="CreateContainer within sandbox \"454c880c8a7e96999daf8ecf6ec67013c69c258d5161f1b736413abd77fcb238\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"8cd490cfbec935518e7521d25f7d464f4198688f566128b344cc1d7144c8e7fd\"" Jan 23 01:10:52.664563 containerd[1975]: time="2026-01-23T01:10:52.664516668Z" level=info msg="StartContainer for \"8cd490cfbec935518e7521d25f7d464f4198688f566128b344cc1d7144c8e7fd\"" Jan 23 01:10:52.667304 containerd[1975]: time="2026-01-23T01:10:52.667262125Z" level=info msg="connecting to shim 8cd490cfbec935518e7521d25f7d464f4198688f566128b344cc1d7144c8e7fd" address="unix:///run/containerd/s/c6baadcab3aa752b4009e867f364df39bf592cb63b9819c06cb0b0be9a320341" protocol=ttrpc version=3 Jan 23 01:10:52.684508 kubelet[3314]: I0123 01:10:52.684086 3314 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-9jsvr" podStartSLOduration=1.978111657 podStartE2EDuration="11.682050572s" podCreationTimestamp="2026-01-23 01:10:41 +0000 
UTC" firstStartedPulling="2026-01-23 01:10:42.340632078 +0000 UTC m=+5.178192320" lastFinishedPulling="2026-01-23 01:10:52.044571001 +0000 UTC m=+14.882131235" observedRunningTime="2026-01-23 01:10:52.604944663 +0000 UTC m=+15.442504924" watchObservedRunningTime="2026-01-23 01:10:52.682050572 +0000 UTC m=+15.519610930" Jan 23 01:10:52.719197 systemd[1]: Started cri-containerd-8cd490cfbec935518e7521d25f7d464f4198688f566128b344cc1d7144c8e7fd.scope - libcontainer container 8cd490cfbec935518e7521d25f7d464f4198688f566128b344cc1d7144c8e7fd. Jan 23 01:10:52.902217 containerd[1975]: time="2026-01-23T01:10:52.902090796Z" level=info msg="StartContainer for \"8cd490cfbec935518e7521d25f7d464f4198688f566128b344cc1d7144c8e7fd\" returns successfully" Jan 23 01:10:52.952999 systemd[1]: cri-containerd-8cd490cfbec935518e7521d25f7d464f4198688f566128b344cc1d7144c8e7fd.scope: Deactivated successfully. Jan 23 01:10:52.953422 systemd[1]: cri-containerd-8cd490cfbec935518e7521d25f7d464f4198688f566128b344cc1d7144c8e7fd.scope: Consumed 47ms CPU time, 4.5M memory peak, 1.6M read from disk. Jan 23 01:10:52.963396 containerd[1975]: time="2026-01-23T01:10:52.963015893Z" level=info msg="received container exit event container_id:\"8cd490cfbec935518e7521d25f7d464f4198688f566128b344cc1d7144c8e7fd\" id:\"8cd490cfbec935518e7521d25f7d464f4198688f566128b344cc1d7144c8e7fd\" pid:4052 exited_at:{seconds:1769130652 nanos:962513129}" Jan 23 01:10:53.024914 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8cd490cfbec935518e7521d25f7d464f4198688f566128b344cc1d7144c8e7fd-rootfs.mount: Deactivated successfully. 
Jan 23 01:10:53.567372 containerd[1975]: time="2026-01-23T01:10:53.567053948Z" level=info msg="CreateContainer within sandbox \"454c880c8a7e96999daf8ecf6ec67013c69c258d5161f1b736413abd77fcb238\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 23 01:10:53.582942 containerd[1975]: time="2026-01-23T01:10:53.582897237Z" level=info msg="Container d7200152035e2c88234fa33a0937b3e79d96b82b4791ccd0be528b2d8e1bb130: CDI devices from CRI Config.CDIDevices: []" Jan 23 01:10:53.590307 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount285740561.mount: Deactivated successfully. Jan 23 01:10:53.603540 containerd[1975]: time="2026-01-23T01:10:53.603489434Z" level=info msg="CreateContainer within sandbox \"454c880c8a7e96999daf8ecf6ec67013c69c258d5161f1b736413abd77fcb238\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"d7200152035e2c88234fa33a0937b3e79d96b82b4791ccd0be528b2d8e1bb130\"" Jan 23 01:10:53.604696 containerd[1975]: time="2026-01-23T01:10:53.604431545Z" level=info msg="StartContainer for \"d7200152035e2c88234fa33a0937b3e79d96b82b4791ccd0be528b2d8e1bb130\"" Jan 23 01:10:53.606248 containerd[1975]: time="2026-01-23T01:10:53.606213438Z" level=info msg="connecting to shim d7200152035e2c88234fa33a0937b3e79d96b82b4791ccd0be528b2d8e1bb130" address="unix:///run/containerd/s/c6baadcab3aa752b4009e867f364df39bf592cb63b9819c06cb0b0be9a320341" protocol=ttrpc version=3 Jan 23 01:10:53.631648 systemd[1]: Started cri-containerd-d7200152035e2c88234fa33a0937b3e79d96b82b4791ccd0be528b2d8e1bb130.scope - libcontainer container d7200152035e2c88234fa33a0937b3e79d96b82b4791ccd0be528b2d8e1bb130. 
Jan 23 01:10:53.684356 containerd[1975]: time="2026-01-23T01:10:53.684292708Z" level=info msg="StartContainer for \"d7200152035e2c88234fa33a0937b3e79d96b82b4791ccd0be528b2d8e1bb130\" returns successfully" Jan 23 01:10:53.698745 systemd[1]: cri-containerd-d7200152035e2c88234fa33a0937b3e79d96b82b4791ccd0be528b2d8e1bb130.scope: Deactivated successfully. Jan 23 01:10:53.700093 containerd[1975]: time="2026-01-23T01:10:53.700056895Z" level=info msg="received container exit event container_id:\"d7200152035e2c88234fa33a0937b3e79d96b82b4791ccd0be528b2d8e1bb130\" id:\"d7200152035e2c88234fa33a0937b3e79d96b82b4791ccd0be528b2d8e1bb130\" pid:4091 exited_at:{seconds:1769130653 nanos:699354796}" Jan 23 01:10:53.725603 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d7200152035e2c88234fa33a0937b3e79d96b82b4791ccd0be528b2d8e1bb130-rootfs.mount: Deactivated successfully. Jan 23 01:10:54.579361 containerd[1975]: time="2026-01-23T01:10:54.578818723Z" level=info msg="CreateContainer within sandbox \"454c880c8a7e96999daf8ecf6ec67013c69c258d5161f1b736413abd77fcb238\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 23 01:10:54.596990 containerd[1975]: time="2026-01-23T01:10:54.594693570Z" level=info msg="Container c7c352a47117677f05d8055e5f58ddf0f261129d61bff705be3824c3832f89a0: CDI devices from CRI Config.CDIDevices: []" Jan 23 01:10:54.609970 containerd[1975]: time="2026-01-23T01:10:54.609923046Z" level=info msg="CreateContainer within sandbox \"454c880c8a7e96999daf8ecf6ec67013c69c258d5161f1b736413abd77fcb238\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"c7c352a47117677f05d8055e5f58ddf0f261129d61bff705be3824c3832f89a0\"" Jan 23 01:10:54.610637 containerd[1975]: time="2026-01-23T01:10:54.610496528Z" level=info msg="StartContainer for \"c7c352a47117677f05d8055e5f58ddf0f261129d61bff705be3824c3832f89a0\"" Jan 23 01:10:54.612414 containerd[1975]: time="2026-01-23T01:10:54.612382117Z" level=info msg="connecting to shim 
c7c352a47117677f05d8055e5f58ddf0f261129d61bff705be3824c3832f89a0" address="unix:///run/containerd/s/c6baadcab3aa752b4009e867f364df39bf592cb63b9819c06cb0b0be9a320341" protocol=ttrpc version=3 Jan 23 01:10:54.644579 systemd[1]: Started cri-containerd-c7c352a47117677f05d8055e5f58ddf0f261129d61bff705be3824c3832f89a0.scope - libcontainer container c7c352a47117677f05d8055e5f58ddf0f261129d61bff705be3824c3832f89a0. Jan 23 01:10:54.702034 containerd[1975]: time="2026-01-23T01:10:54.701993719Z" level=info msg="StartContainer for \"c7c352a47117677f05d8055e5f58ddf0f261129d61bff705be3824c3832f89a0\" returns successfully" Jan 23 01:10:54.945435 kubelet[3314]: I0123 01:10:54.945044 3314 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jan 23 01:10:55.002049 systemd[1]: Created slice kubepods-burstable-pod7482c982_41a1_4866_a779_f22e7f9520aa.slice - libcontainer container kubepods-burstable-pod7482c982_41a1_4866_a779_f22e7f9520aa.slice. Jan 23 01:10:55.016577 systemd[1]: Created slice kubepods-burstable-pod0828d38c_fa49_4120_883e_a42d6bb41848.slice - libcontainer container kubepods-burstable-pod0828d38c_fa49_4120_883e_a42d6bb41848.slice. 
Jan 23 01:10:55.105946 kubelet[3314]: I0123 01:10:55.105881 3314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kd684\" (UniqueName: \"kubernetes.io/projected/0828d38c-fa49-4120-883e-a42d6bb41848-kube-api-access-kd684\") pod \"coredns-668d6bf9bc-n6cg8\" (UID: \"0828d38c-fa49-4120-883e-a42d6bb41848\") " pod="kube-system/coredns-668d6bf9bc-n6cg8" Jan 23 01:10:55.105946 kubelet[3314]: I0123 01:10:55.105928 3314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7482c982-41a1-4866-a779-f22e7f9520aa-config-volume\") pod \"coredns-668d6bf9bc-7xxgn\" (UID: \"7482c982-41a1-4866-a779-f22e7f9520aa\") " pod="kube-system/coredns-668d6bf9bc-7xxgn" Jan 23 01:10:55.105946 kubelet[3314]: I0123 01:10:55.105953 3314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0828d38c-fa49-4120-883e-a42d6bb41848-config-volume\") pod \"coredns-668d6bf9bc-n6cg8\" (UID: \"0828d38c-fa49-4120-883e-a42d6bb41848\") " pod="kube-system/coredns-668d6bf9bc-n6cg8" Jan 23 01:10:55.106191 kubelet[3314]: I0123 01:10:55.105969 3314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vpvb2\" (UniqueName: \"kubernetes.io/projected/7482c982-41a1-4866-a779-f22e7f9520aa-kube-api-access-vpvb2\") pod \"coredns-668d6bf9bc-7xxgn\" (UID: \"7482c982-41a1-4866-a779-f22e7f9520aa\") " pod="kube-system/coredns-668d6bf9bc-7xxgn" Jan 23 01:10:55.309941 containerd[1975]: time="2026-01-23T01:10:55.309811311Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-7xxgn,Uid:7482c982-41a1-4866-a779-f22e7f9520aa,Namespace:kube-system,Attempt:0,}" Jan 23 01:10:55.326223 containerd[1975]: time="2026-01-23T01:10:55.326182557Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-668d6bf9bc-n6cg8,Uid:0828d38c-fa49-4120-883e-a42d6bb41848,Namespace:kube-system,Attempt:0,}" Jan 23 01:10:55.603482 kubelet[3314]: I0123 01:10:55.603242 3314 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-89dp9" podStartSLOduration=6.496092395 podStartE2EDuration="14.603223705s" podCreationTimestamp="2026-01-23 01:10:41 +0000 UTC" firstStartedPulling="2026-01-23 01:10:42.161225393 +0000 UTC m=+4.998785629" lastFinishedPulling="2026-01-23 01:10:50.26835669 +0000 UTC m=+13.105916939" observedRunningTime="2026-01-23 01:10:55.603007689 +0000 UTC m=+18.440567944" watchObservedRunningTime="2026-01-23 01:10:55.603223705 +0000 UTC m=+18.440783944" Jan 23 01:10:57.385270 (udev-worker)[4218]: Network interface NamePolicy= disabled on kernel command line. Jan 23 01:10:57.385658 (udev-worker)[4219]: Network interface NamePolicy= disabled on kernel command line. Jan 23 01:10:57.388467 systemd-networkd[1858]: cilium_host: Link UP Jan 23 01:10:57.388615 systemd-networkd[1858]: cilium_net: Link UP Jan 23 01:10:57.388765 systemd-networkd[1858]: cilium_net: Gained carrier Jan 23 01:10:57.388926 systemd-networkd[1858]: cilium_host: Gained carrier Jan 23 01:10:57.543062 (udev-worker)[4266]: Network interface NamePolicy= disabled on kernel command line. Jan 23 01:10:57.551116 systemd-networkd[1858]: cilium_vxlan: Link UP Jan 23 01:10:57.551139 systemd-networkd[1858]: cilium_vxlan: Gained carrier Jan 23 01:10:57.839643 systemd-networkd[1858]: cilium_net: Gained IPv6LL Jan 23 01:10:58.112193 systemd-networkd[1858]: cilium_host: Gained IPv6LL Jan 23 01:10:58.274371 kernel: NET: Registered PF_ALG protocol family Jan 23 01:10:59.074470 systemd-networkd[1858]: cilium_vxlan: Gained IPv6LL Jan 23 01:10:59.092071 (udev-worker)[4268]: Network interface NamePolicy= disabled on kernel command line. 
Jan 23 01:10:59.095480 systemd-networkd[1858]: lxc_health: Link UP Jan 23 01:10:59.116987 systemd-networkd[1858]: lxc_health: Gained carrier Jan 23 01:10:59.422354 kernel: eth0: renamed from tmpcf1d1 Jan 23 01:10:59.424517 systemd-networkd[1858]: lxc37e5fa5e3511: Link UP Jan 23 01:10:59.424870 systemd-networkd[1858]: lxc37e5fa5e3511: Gained carrier Jan 23 01:10:59.427189 systemd-networkd[1858]: lxcaf55ac790676: Link UP Jan 23 01:10:59.435354 kernel: eth0: renamed from tmp1ce8e Jan 23 01:10:59.441246 systemd-networkd[1858]: lxcaf55ac790676: Gained carrier Jan 23 01:11:00.543586 systemd-networkd[1858]: lxc_health: Gained IPv6LL Jan 23 01:11:00.863524 systemd-networkd[1858]: lxcaf55ac790676: Gained IPv6LL Jan 23 01:11:01.119495 systemd-networkd[1858]: lxc37e5fa5e3511: Gained IPv6LL Jan 23 01:11:03.599423 ntpd[2169]: Listen normally on 6 cilium_host 192.168.0.68:123 Jan 23 01:11:03.599496 ntpd[2169]: Listen normally on 7 cilium_net [fe80::604b:e9ff:fe87:ec78%4]:123 Jan 23 01:11:03.599527 ntpd[2169]: Listen normally on 8 cilium_host [fe80::c10:84ff:fe73:b4f9%5]:123 Jan 23 01:11:03.599559 ntpd[2169]: Listen normally on 9 cilium_vxlan [fe80::e872:17ff:feed:bbfb%6]:123 Jan 23 01:11:03.599585 ntpd[2169]: Listen normally on 10 lxc_health [fe80::7483:8aff:fe91:1758%8]:123 Jan 23 01:11:03.599610 ntpd[2169]: Listen normally on 11 lxcaf55ac790676 [fe80::440f:27ff:feb0:4e39%10]:123 Jan 23 01:11:03.599637 ntpd[2169]: Listen normally on 12 lxc37e5fa5e3511 [fe80::6caa:edff:fe8e:c799%12]:123 Jan 23 01:11:05.680427 containerd[1975]: time="2026-01-23T01:11:05.679757133Z" level=info msg="connecting to shim cf1d14049667d5198c93e2806925243c36ce8bbae96dc5032d9061e9246c6ffd" address="unix:///run/containerd/s/19259eb9b6fde0a972592a544443c15002af04030762dd0a09bd10b3fdccd229" namespace=k8s.io protocol=ttrpc version=3 Jan 23 01:11:05.712553 containerd[1975]: time="2026-01-23T01:11:05.712497436Z" level=info msg="connecting to shim 1ce8e231bc458992b001a1f520fd9b88b06a262587f558fd15c3022eac3b70e7" address="unix:///run/containerd/s/64382e153d97a8303bc910abdd1aba7c76550c46af93430016ec02c43bb522c1" namespace=k8s.io protocol=ttrpc version=3 Jan 23 01:11:05.721950 systemd[1]: Started cri-containerd-cf1d14049667d5198c93e2806925243c36ce8bbae96dc5032d9061e9246c6ffd.scope - libcontainer container cf1d14049667d5198c93e2806925243c36ce8bbae96dc5032d9061e9246c6ffd. Jan 23 01:11:05.778776 systemd[1]: Started cri-containerd-1ce8e231bc458992b001a1f520fd9b88b06a262587f558fd15c3022eac3b70e7.scope - libcontainer container 1ce8e231bc458992b001a1f520fd9b88b06a262587f558fd15c3022eac3b70e7.
Jan 23 01:11:05.884092 containerd[1975]: time="2026-01-23T01:11:05.883986326Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-n6cg8,Uid:0828d38c-fa49-4120-883e-a42d6bb41848,Namespace:kube-system,Attempt:0,} returns sandbox id \"cf1d14049667d5198c93e2806925243c36ce8bbae96dc5032d9061e9246c6ffd\"" Jan 23 01:11:05.890644 containerd[1975]: time="2026-01-23T01:11:05.890604776Z" level=info msg="CreateContainer within sandbox \"cf1d14049667d5198c93e2806925243c36ce8bbae96dc5032d9061e9246c6ffd\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 23 01:11:05.938773 containerd[1975]: time="2026-01-23T01:11:05.938215461Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-7xxgn,Uid:7482c982-41a1-4866-a779-f22e7f9520aa,Namespace:kube-system,Attempt:0,} returns sandbox id \"1ce8e231bc458992b001a1f520fd9b88b06a262587f558fd15c3022eac3b70e7\"" Jan 23 01:11:05.942353 containerd[1975]: time="2026-01-23T01:11:05.942299343Z" level=info msg="CreateContainer within sandbox \"1ce8e231bc458992b001a1f520fd9b88b06a262587f558fd15c3022eac3b70e7\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 23 01:11:05.967347 containerd[1975]: time="2026-01-23T01:11:05.967115104Z" level=info msg="Container ddbe31e961959be27f6376b0855fe4a8924330781f6b329be8b1519a529d663c: CDI devices from CRI Config.CDIDevices: []" Jan 23 01:11:05.967347 containerd[1975]: time="2026-01-23T01:11:05.967140444Z" level=info msg="Container 72f1734dd7412f8993f5c3416091531bace295e642b2d62aa89e5d15da0a9ed5: CDI devices from CRI Config.CDIDevices: []" Jan 23 01:11:05.975264 containerd[1975]: time="2026-01-23T01:11:05.975193610Z" level=info msg="CreateContainer within sandbox \"1ce8e231bc458992b001a1f520fd9b88b06a262587f558fd15c3022eac3b70e7\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"72f1734dd7412f8993f5c3416091531bace295e642b2d62aa89e5d15da0a9ed5\"" Jan 23 01:11:05.976709 containerd[1975]: 
time="2026-01-23T01:11:05.976683941Z" level=info msg="StartContainer for \"72f1734dd7412f8993f5c3416091531bace295e642b2d62aa89e5d15da0a9ed5\"" Jan 23 01:11:05.977982 containerd[1975]: time="2026-01-23T01:11:05.977859158Z" level=info msg="CreateContainer within sandbox \"cf1d14049667d5198c93e2806925243c36ce8bbae96dc5032d9061e9246c6ffd\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ddbe31e961959be27f6376b0855fe4a8924330781f6b329be8b1519a529d663c\"" Jan 23 01:11:05.979260 containerd[1975]: time="2026-01-23T01:11:05.978540751Z" level=info msg="StartContainer for \"ddbe31e961959be27f6376b0855fe4a8924330781f6b329be8b1519a529d663c\"" Jan 23 01:11:05.979260 containerd[1975]: time="2026-01-23T01:11:05.979205451Z" level=info msg="connecting to shim ddbe31e961959be27f6376b0855fe4a8924330781f6b329be8b1519a529d663c" address="unix:///run/containerd/s/19259eb9b6fde0a972592a544443c15002af04030762dd0a09bd10b3fdccd229" protocol=ttrpc version=3 Jan 23 01:11:05.979588 containerd[1975]: time="2026-01-23T01:11:05.979570067Z" level=info msg="connecting to shim 72f1734dd7412f8993f5c3416091531bace295e642b2d62aa89e5d15da0a9ed5" address="unix:///run/containerd/s/64382e153d97a8303bc910abdd1aba7c76550c46af93430016ec02c43bb522c1" protocol=ttrpc version=3 Jan 23 01:11:06.003668 systemd[1]: Started cri-containerd-72f1734dd7412f8993f5c3416091531bace295e642b2d62aa89e5d15da0a9ed5.scope - libcontainer container 72f1734dd7412f8993f5c3416091531bace295e642b2d62aa89e5d15da0a9ed5. Jan 23 01:11:06.016575 systemd[1]: Started cri-containerd-ddbe31e961959be27f6376b0855fe4a8924330781f6b329be8b1519a529d663c.scope - libcontainer container ddbe31e961959be27f6376b0855fe4a8924330781f6b329be8b1519a529d663c. 
Jan 23 01:11:06.079702 containerd[1975]: time="2026-01-23T01:11:06.079660179Z" level=info msg="StartContainer for \"ddbe31e961959be27f6376b0855fe4a8924330781f6b329be8b1519a529d663c\" returns successfully" Jan 23 01:11:06.080097 containerd[1975]: time="2026-01-23T01:11:06.080077787Z" level=info msg="StartContainer for \"72f1734dd7412f8993f5c3416091531bace295e642b2d62aa89e5d15da0a9ed5\" returns successfully" Jan 23 01:11:06.649954 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount879164764.mount: Deactivated successfully. Jan 23 01:11:06.681405 kubelet[3314]: I0123 01:11:06.681153 3314 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-7xxgn" podStartSLOduration=25.681122086 podStartE2EDuration="25.681122086s" podCreationTimestamp="2026-01-23 01:10:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 01:11:06.680317364 +0000 UTC m=+29.517877623" watchObservedRunningTime="2026-01-23 01:11:06.681122086 +0000 UTC m=+29.518682344" Jan 23 01:11:06.698844 kubelet[3314]: I0123 01:11:06.698196 3314 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-n6cg8" podStartSLOduration=25.6981758 podStartE2EDuration="25.6981758s" podCreationTimestamp="2026-01-23 01:10:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 01:11:06.696812502 +0000 UTC m=+29.534372760" watchObservedRunningTime="2026-01-23 01:11:06.6981758 +0000 UTC m=+29.535736056" Jan 23 01:11:09.837425 kubelet[3314]: I0123 01:11:09.837384 3314 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 23 01:11:17.192685 systemd[1]: Started sshd@7-172.31.21.166:22-68.220.241.50:60268.service - OpenSSH per-connection server daemon (68.220.241.50:60268). 
Jan 23 01:11:17.735444 sshd[4804]: Accepted publickey for core from 68.220.241.50 port 60268 ssh2: RSA SHA256:TjRK9JlVbt43cjCH9yNUnU6Xa0awhPYO1lN4GVbk/WA Jan 23 01:11:17.738047 sshd-session[4804]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:11:17.753400 systemd-logind[1956]: New session 8 of user core. Jan 23 01:11:17.758952 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 23 01:11:18.705292 sshd[4807]: Connection closed by 68.220.241.50 port 60268 Jan 23 01:11:18.717800 systemd[1]: sshd@7-172.31.21.166:22-68.220.241.50:60268.service: Deactivated successfully. Jan 23 01:11:18.707002 sshd-session[4804]: pam_unix(sshd:session): session closed for user core Jan 23 01:11:18.722248 systemd[1]: session-8.scope: Deactivated successfully. Jan 23 01:11:18.723965 systemd-logind[1956]: Session 8 logged out. Waiting for processes to exit. Jan 23 01:11:18.726290 systemd-logind[1956]: Removed session 8. Jan 23 01:11:23.799824 systemd[1]: Started sshd@8-172.31.21.166:22-68.220.241.50:43466.service - OpenSSH per-connection server daemon (68.220.241.50:43466). Jan 23 01:11:24.305354 sshd[4822]: Accepted publickey for core from 68.220.241.50 port 43466 ssh2: RSA SHA256:TjRK9JlVbt43cjCH9yNUnU6Xa0awhPYO1lN4GVbk/WA Jan 23 01:11:24.306736 sshd-session[4822]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:11:24.312713 systemd-logind[1956]: New session 9 of user core. Jan 23 01:11:24.318571 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 23 01:11:24.753626 sshd[4825]: Connection closed by 68.220.241.50 port 43466 Jan 23 01:11:24.754276 sshd-session[4822]: pam_unix(sshd:session): session closed for user core Jan 23 01:11:24.759191 systemd[1]: sshd@8-172.31.21.166:22-68.220.241.50:43466.service: Deactivated successfully. Jan 23 01:11:24.761159 systemd[1]: session-9.scope: Deactivated successfully. Jan 23 01:11:24.764491 systemd-logind[1956]: Session 9 logged out. 
Waiting for processes to exit. Jan 23 01:11:24.768524 systemd-logind[1956]: Removed session 9. Jan 23 01:11:29.846349 systemd[1]: Started sshd@9-172.31.21.166:22-68.220.241.50:43478.service - OpenSSH per-connection server daemon (68.220.241.50:43478). Jan 23 01:11:30.364963 sshd[4839]: Accepted publickey for core from 68.220.241.50 port 43478 ssh2: RSA SHA256:TjRK9JlVbt43cjCH9yNUnU6Xa0awhPYO1lN4GVbk/WA Jan 23 01:11:30.365701 sshd-session[4839]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:11:30.372033 systemd-logind[1956]: New session 10 of user core. Jan 23 01:11:30.381586 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 23 01:11:30.801714 sshd[4843]: Connection closed by 68.220.241.50 port 43478 Jan 23 01:11:30.802426 sshd-session[4839]: pam_unix(sshd:session): session closed for user core Jan 23 01:11:30.807582 systemd[1]: sshd@9-172.31.21.166:22-68.220.241.50:43478.service: Deactivated successfully. Jan 23 01:11:30.809917 systemd[1]: session-10.scope: Deactivated successfully. Jan 23 01:11:30.811112 systemd-logind[1956]: Session 10 logged out. Waiting for processes to exit. Jan 23 01:11:30.812551 systemd-logind[1956]: Removed session 10. Jan 23 01:11:35.890826 systemd[1]: Started sshd@10-172.31.21.166:22-68.220.241.50:56442.service - OpenSSH per-connection server daemon (68.220.241.50:56442). Jan 23 01:11:36.399588 sshd[4856]: Accepted publickey for core from 68.220.241.50 port 56442 ssh2: RSA SHA256:TjRK9JlVbt43cjCH9yNUnU6Xa0awhPYO1lN4GVbk/WA Jan 23 01:11:36.401170 sshd-session[4856]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:11:36.407428 systemd-logind[1956]: New session 11 of user core. Jan 23 01:11:36.411546 systemd[1]: Started session-11.scope - Session 11 of User core. 
Jan 23 01:11:36.859533 sshd[4859]: Connection closed by 68.220.241.50 port 56442 Jan 23 01:11:36.890171 sshd-session[4856]: pam_unix(sshd:session): session closed for user core Jan 23 01:11:36.894460 systemd[1]: sshd@10-172.31.21.166:22-68.220.241.50:56442.service: Deactivated successfully. Jan 23 01:11:36.897152 systemd[1]: session-11.scope: Deactivated successfully. Jan 23 01:11:36.899273 systemd-logind[1956]: Session 11 logged out. Waiting for processes to exit. Jan 23 01:11:36.901824 systemd-logind[1956]: Removed session 11. Jan 23 01:11:36.964414 systemd[1]: Started sshd@11-172.31.21.166:22-68.220.241.50:56446.service - OpenSSH per-connection server daemon (68.220.241.50:56446). Jan 23 01:11:37.515708 sshd[4871]: Accepted publickey for core from 68.220.241.50 port 56446 ssh2: RSA SHA256:TjRK9JlVbt43cjCH9yNUnU6Xa0awhPYO1lN4GVbk/WA Jan 23 01:11:37.517366 sshd-session[4871]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:11:37.523027 systemd-logind[1956]: New session 12 of user core. Jan 23 01:11:37.530566 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 23 01:11:38.105795 sshd[4876]: Connection closed by 68.220.241.50 port 56446 Jan 23 01:11:38.106630 sshd-session[4871]: pam_unix(sshd:session): session closed for user core Jan 23 01:11:38.111006 systemd[1]: sshd@11-172.31.21.166:22-68.220.241.50:56446.service: Deactivated successfully. Jan 23 01:11:38.113373 systemd[1]: session-12.scope: Deactivated successfully. Jan 23 01:11:38.114249 systemd-logind[1956]: Session 12 logged out. Waiting for processes to exit. Jan 23 01:11:38.115933 systemd-logind[1956]: Removed session 12. Jan 23 01:11:38.186805 systemd[1]: Started sshd@12-172.31.21.166:22-68.220.241.50:56462.service - OpenSSH per-connection server daemon (68.220.241.50:56462). 
Jan 23 01:11:38.681908 sshd[4886]: Accepted publickey for core from 68.220.241.50 port 56462 ssh2: RSA SHA256:TjRK9JlVbt43cjCH9yNUnU6Xa0awhPYO1lN4GVbk/WA Jan 23 01:11:38.683484 sshd-session[4886]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:11:38.689949 systemd-logind[1956]: New session 13 of user core. Jan 23 01:11:38.695616 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 23 01:11:39.104614 sshd[4889]: Connection closed by 68.220.241.50 port 56462 Jan 23 01:11:39.105599 sshd-session[4886]: pam_unix(sshd:session): session closed for user core Jan 23 01:11:39.110258 systemd[1]: sshd@12-172.31.21.166:22-68.220.241.50:56462.service: Deactivated successfully. Jan 23 01:11:39.112643 systemd[1]: session-13.scope: Deactivated successfully. Jan 23 01:11:39.115883 systemd-logind[1956]: Session 13 logged out. Waiting for processes to exit. Jan 23 01:11:39.117684 systemd-logind[1956]: Removed session 13. Jan 23 01:11:44.198686 systemd[1]: Started sshd@13-172.31.21.166:22-68.220.241.50:36882.service - OpenSSH per-connection server daemon (68.220.241.50:36882). Jan 23 01:11:44.691116 sshd[4903]: Accepted publickey for core from 68.220.241.50 port 36882 ssh2: RSA SHA256:TjRK9JlVbt43cjCH9yNUnU6Xa0awhPYO1lN4GVbk/WA Jan 23 01:11:44.692454 sshd-session[4903]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:11:44.698419 systemd-logind[1956]: New session 14 of user core. Jan 23 01:11:44.704680 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 23 01:11:45.117844 sshd[4906]: Connection closed by 68.220.241.50 port 36882 Jan 23 01:11:45.118596 sshd-session[4903]: pam_unix(sshd:session): session closed for user core Jan 23 01:11:45.124780 systemd[1]: sshd@13-172.31.21.166:22-68.220.241.50:36882.service: Deactivated successfully. Jan 23 01:11:45.127948 systemd[1]: session-14.scope: Deactivated successfully. 
Jan 23 01:11:45.129249 systemd-logind[1956]: Session 14 logged out. Waiting for processes to exit. Jan 23 01:11:45.131783 systemd-logind[1956]: Removed session 14. Jan 23 01:11:45.206967 systemd[1]: Started sshd@14-172.31.21.166:22-68.220.241.50:36888.service - OpenSSH per-connection server daemon (68.220.241.50:36888). Jan 23 01:11:45.707612 sshd[4918]: Accepted publickey for core from 68.220.241.50 port 36888 ssh2: RSA SHA256:TjRK9JlVbt43cjCH9yNUnU6Xa0awhPYO1lN4GVbk/WA Jan 23 01:11:45.709257 sshd-session[4918]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:11:45.715560 systemd-logind[1956]: New session 15 of user core. Jan 23 01:11:45.725558 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 23 01:11:48.969435 sshd[4921]: Connection closed by 68.220.241.50 port 36888 Jan 23 01:11:48.970639 sshd-session[4918]: pam_unix(sshd:session): session closed for user core Jan 23 01:11:48.987457 systemd[1]: sshd@14-172.31.21.166:22-68.220.241.50:36888.service: Deactivated successfully. Jan 23 01:11:48.996078 systemd[1]: session-15.scope: Deactivated successfully. Jan 23 01:11:48.999696 systemd-logind[1956]: Session 15 logged out. Waiting for processes to exit. Jan 23 01:11:49.001646 systemd-logind[1956]: Removed session 15. Jan 23 01:11:49.075242 systemd[1]: Started sshd@15-172.31.21.166:22-68.220.241.50:36904.service - OpenSSH per-connection server daemon (68.220.241.50:36904). Jan 23 01:11:49.646126 sshd[4931]: Accepted publickey for core from 68.220.241.50 port 36904 ssh2: RSA SHA256:TjRK9JlVbt43cjCH9yNUnU6Xa0awhPYO1lN4GVbk/WA Jan 23 01:11:49.647673 sshd-session[4931]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:11:49.654447 systemd-logind[1956]: New session 16 of user core. Jan 23 01:11:49.658580 systemd[1]: Started session-16.scope - Session 16 of User core. 
Jan 23 01:11:50.849468 sshd[4934]: Connection closed by 68.220.241.50 port 36904 Jan 23 01:11:50.850961 sshd-session[4931]: pam_unix(sshd:session): session closed for user core Jan 23 01:11:50.855956 systemd[1]: sshd@15-172.31.21.166:22-68.220.241.50:36904.service: Deactivated successfully. Jan 23 01:11:50.859275 systemd[1]: session-16.scope: Deactivated successfully. Jan 23 01:11:50.861466 systemd-logind[1956]: Session 16 logged out. Waiting for processes to exit. Jan 23 01:11:50.863067 systemd-logind[1956]: Removed session 16. Jan 23 01:11:50.928726 systemd[1]: Started sshd@16-172.31.21.166:22-68.220.241.50:36908.service - OpenSSH per-connection server daemon (68.220.241.50:36908). Jan 23 01:11:51.441114 sshd[4952]: Accepted publickey for core from 68.220.241.50 port 36908 ssh2: RSA SHA256:TjRK9JlVbt43cjCH9yNUnU6Xa0awhPYO1lN4GVbk/WA Jan 23 01:11:51.442394 sshd-session[4952]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:11:51.449402 systemd-logind[1956]: New session 17 of user core. Jan 23 01:11:51.459600 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 23 01:11:52.045513 sshd[4955]: Connection closed by 68.220.241.50 port 36908 Jan 23 01:11:52.047368 sshd-session[4952]: pam_unix(sshd:session): session closed for user core Jan 23 01:11:52.053431 systemd-logind[1956]: Session 17 logged out. Waiting for processes to exit. Jan 23 01:11:52.053985 systemd[1]: sshd@16-172.31.21.166:22-68.220.241.50:36908.service: Deactivated successfully. Jan 23 01:11:52.056860 systemd[1]: session-17.scope: Deactivated successfully. Jan 23 01:11:52.059429 systemd-logind[1956]: Removed session 17. Jan 23 01:11:52.134133 systemd[1]: Started sshd@17-172.31.21.166:22-68.220.241.50:36914.service - OpenSSH per-connection server daemon (68.220.241.50:36914). 
Jan 23 01:11:52.626586 sshd[4965]: Accepted publickey for core from 68.220.241.50 port 36914 ssh2: RSA SHA256:TjRK9JlVbt43cjCH9yNUnU6Xa0awhPYO1lN4GVbk/WA Jan 23 01:11:52.628039 sshd-session[4965]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:11:52.633145 systemd-logind[1956]: New session 18 of user core. Jan 23 01:11:52.638535 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 23 01:11:53.054162 sshd[4968]: Connection closed by 68.220.241.50 port 36914 Jan 23 01:11:53.055600 sshd-session[4965]: pam_unix(sshd:session): session closed for user core Jan 23 01:11:53.061387 systemd[1]: sshd@17-172.31.21.166:22-68.220.241.50:36914.service: Deactivated successfully. Jan 23 01:11:53.063577 systemd[1]: session-18.scope: Deactivated successfully. Jan 23 01:11:53.065971 systemd-logind[1956]: Session 18 logged out. Waiting for processes to exit. Jan 23 01:11:53.066985 systemd-logind[1956]: Removed session 18. Jan 23 01:11:58.141884 systemd[1]: Started sshd@18-172.31.21.166:22-68.220.241.50:54638.service - OpenSSH per-connection server daemon (68.220.241.50:54638). Jan 23 01:11:58.637848 sshd[4982]: Accepted publickey for core from 68.220.241.50 port 54638 ssh2: RSA SHA256:TjRK9JlVbt43cjCH9yNUnU6Xa0awhPYO1lN4GVbk/WA Jan 23 01:11:58.639212 sshd-session[4982]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:11:58.645896 systemd-logind[1956]: New session 19 of user core. Jan 23 01:11:58.651552 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 23 01:11:59.060535 sshd[4985]: Connection closed by 68.220.241.50 port 54638 Jan 23 01:11:59.062906 sshd-session[4982]: pam_unix(sshd:session): session closed for user core Jan 23 01:11:59.067766 systemd[1]: sshd@18-172.31.21.166:22-68.220.241.50:54638.service: Deactivated successfully. Jan 23 01:11:59.070303 systemd[1]: session-19.scope: Deactivated successfully. 
Jan 23 01:11:59.071643 systemd-logind[1956]: Session 19 logged out. Waiting for processes to exit. Jan 23 01:11:59.073709 systemd-logind[1956]: Removed session 19. Jan 23 01:12:04.152277 systemd[1]: Started sshd@19-172.31.21.166:22-68.220.241.50:45162.service - OpenSSH per-connection server daemon (68.220.241.50:45162). Jan 23 01:12:04.656016 sshd[4997]: Accepted publickey for core from 68.220.241.50 port 45162 ssh2: RSA SHA256:TjRK9JlVbt43cjCH9yNUnU6Xa0awhPYO1lN4GVbk/WA Jan 23 01:12:04.657791 sshd-session[4997]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:12:04.664140 systemd-logind[1956]: New session 20 of user core. Jan 23 01:12:04.671714 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 23 01:12:05.082963 sshd[5001]: Connection closed by 68.220.241.50 port 45162 Jan 23 01:12:05.086397 sshd-session[4997]: pam_unix(sshd:session): session closed for user core Jan 23 01:12:05.091304 systemd-logind[1956]: Session 20 logged out. Waiting for processes to exit. Jan 23 01:12:05.092199 systemd[1]: sshd@19-172.31.21.166:22-68.220.241.50:45162.service: Deactivated successfully. Jan 23 01:12:05.094719 systemd[1]: session-20.scope: Deactivated successfully. Jan 23 01:12:05.096651 systemd-logind[1956]: Removed session 20. Jan 23 01:12:10.186253 systemd[1]: Started sshd@20-172.31.21.166:22-68.220.241.50:45176.service - OpenSSH per-connection server daemon (68.220.241.50:45176). Jan 23 01:12:10.725839 sshd[5013]: Accepted publickey for core from 68.220.241.50 port 45176 ssh2: RSA SHA256:TjRK9JlVbt43cjCH9yNUnU6Xa0awhPYO1lN4GVbk/WA Jan 23 01:12:10.728021 sshd-session[5013]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:12:10.734408 systemd-logind[1956]: New session 21 of user core. Jan 23 01:12:10.743538 systemd[1]: Started session-21.scope - Session 21 of User core. 
Jan 23 01:12:11.179555 sshd[5016]: Connection closed by 68.220.241.50 port 45176 Jan 23 01:12:11.180563 sshd-session[5013]: pam_unix(sshd:session): session closed for user core Jan 23 01:12:11.185609 systemd[1]: sshd@20-172.31.21.166:22-68.220.241.50:45176.service: Deactivated successfully. Jan 23 01:12:11.188117 systemd[1]: session-21.scope: Deactivated successfully. Jan 23 01:12:11.190277 systemd-logind[1956]: Session 21 logged out. Waiting for processes to exit. Jan 23 01:12:11.192154 systemd-logind[1956]: Removed session 21. Jan 23 01:12:11.260616 systemd[1]: Started sshd@21-172.31.21.166:22-68.220.241.50:45190.service - OpenSSH per-connection server daemon (68.220.241.50:45190). Jan 23 01:12:11.780176 sshd[5028]: Accepted publickey for core from 68.220.241.50 port 45190 ssh2: RSA SHA256:TjRK9JlVbt43cjCH9yNUnU6Xa0awhPYO1lN4GVbk/WA Jan 23 01:12:11.781984 sshd-session[5028]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:12:11.788418 systemd-logind[1956]: New session 22 of user core. Jan 23 01:12:11.796604 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 23 01:12:14.184033 containerd[1975]: time="2026-01-23T01:12:14.183836909Z" level=info msg="StopContainer for \"25a0e63be5de55af17308ed55d13ff9123c2e0de36e6b7d94fdec982d66e6894\" with timeout 30 (s)" Jan 23 01:12:14.200167 containerd[1975]: time="2026-01-23T01:12:14.200105556Z" level=info msg="Stop container \"25a0e63be5de55af17308ed55d13ff9123c2e0de36e6b7d94fdec982d66e6894\" with signal terminated" Jan 23 01:12:14.215131 systemd[1]: cri-containerd-25a0e63be5de55af17308ed55d13ff9123c2e0de36e6b7d94fdec982d66e6894.scope: Deactivated successfully. 
Jan 23 01:12:14.217814 containerd[1975]: time="2026-01-23T01:12:14.217687700Z" level=info msg="received container exit event container_id:\"25a0e63be5de55af17308ed55d13ff9123c2e0de36e6b7d94fdec982d66e6894\" id:\"25a0e63be5de55af17308ed55d13ff9123c2e0de36e6b7d94fdec982d66e6894\" pid:4019 exited_at:{seconds:1769130734 nanos:216866382}" Jan 23 01:12:14.249766 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-25a0e63be5de55af17308ed55d13ff9123c2e0de36e6b7d94fdec982d66e6894-rootfs.mount: Deactivated successfully. Jan 23 01:12:14.284537 containerd[1975]: time="2026-01-23T01:12:14.284427921Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 23 01:12:14.290570 containerd[1975]: time="2026-01-23T01:12:14.290224608Z" level=info msg="StopContainer for \"25a0e63be5de55af17308ed55d13ff9123c2e0de36e6b7d94fdec982d66e6894\" returns successfully" Jan 23 01:12:14.291580 containerd[1975]: time="2026-01-23T01:12:14.291544042Z" level=info msg="StopPodSandbox for \"17b7cbdf5c8cecccdb5cd33336bf220cf4f721272468cce858cb04f621177da3\"" Jan 23 01:12:14.291686 containerd[1975]: time="2026-01-23T01:12:14.291631239Z" level=info msg="Container to stop \"25a0e63be5de55af17308ed55d13ff9123c2e0de36e6b7d94fdec982d66e6894\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 23 01:12:14.298639 containerd[1975]: time="2026-01-23T01:12:14.298488316Z" level=info msg="StopContainer for \"c7c352a47117677f05d8055e5f58ddf0f261129d61bff705be3824c3832f89a0\" with timeout 2 (s)" Jan 23 01:12:14.298818 containerd[1975]: time="2026-01-23T01:12:14.298793551Z" level=info msg="Stop container \"c7c352a47117677f05d8055e5f58ddf0f261129d61bff705be3824c3832f89a0\" with signal terminated" Jan 23 01:12:14.303094 systemd[1]: 
cri-containerd-17b7cbdf5c8cecccdb5cd33336bf220cf4f721272468cce858cb04f621177da3.scope: Deactivated successfully. Jan 23 01:12:14.305946 containerd[1975]: time="2026-01-23T01:12:14.305897242Z" level=info msg="received sandbox exit event container_id:\"17b7cbdf5c8cecccdb5cd33336bf220cf4f721272468cce858cb04f621177da3\" id:\"17b7cbdf5c8cecccdb5cd33336bf220cf4f721272468cce858cb04f621177da3\" exit_status:137 exited_at:{seconds:1769130734 nanos:305498157}" monitor_name=podsandbox Jan 23 01:12:14.317027 systemd-networkd[1858]: lxc_health: Link DOWN Jan 23 01:12:14.318174 systemd-networkd[1858]: lxc_health: Lost carrier Jan 23 01:12:14.345479 systemd[1]: cri-containerd-c7c352a47117677f05d8055e5f58ddf0f261129d61bff705be3824c3832f89a0.scope: Deactivated successfully. Jan 23 01:12:14.345890 systemd[1]: cri-containerd-c7c352a47117677f05d8055e5f58ddf0f261129d61bff705be3824c3832f89a0.scope: Consumed 8.589s CPU time, 223M memory peak, 102.8M read from disk, 13.3M written to disk. Jan 23 01:12:14.376843 containerd[1975]: time="2026-01-23T01:12:14.349707597Z" level=info msg="received container exit event container_id:\"c7c352a47117677f05d8055e5f58ddf0f261129d61bff705be3824c3832f89a0\" id:\"c7c352a47117677f05d8055e5f58ddf0f261129d61bff705be3824c3832f89a0\" pid:4129 exited_at:{seconds:1769130734 nanos:348614196}" Jan 23 01:12:14.377010 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-17b7cbdf5c8cecccdb5cd33336bf220cf4f721272468cce858cb04f621177da3-rootfs.mount: Deactivated successfully. Jan 23 01:12:14.390515 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c7c352a47117677f05d8055e5f58ddf0f261129d61bff705be3824c3832f89a0-rootfs.mount: Deactivated successfully. 
Jan 23 01:12:14.397622 containerd[1975]: time="2026-01-23T01:12:14.397577770Z" level=info msg="shim disconnected" id=17b7cbdf5c8cecccdb5cd33336bf220cf4f721272468cce858cb04f621177da3 namespace=k8s.io Jan 23 01:12:14.397622 containerd[1975]: time="2026-01-23T01:12:14.397622993Z" level=warning msg="cleaning up after shim disconnected" id=17b7cbdf5c8cecccdb5cd33336bf220cf4f721272468cce858cb04f621177da3 namespace=k8s.io Jan 23 01:12:14.398191 containerd[1975]: time="2026-01-23T01:12:14.397633570Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 23 01:12:14.402116 containerd[1975]: time="2026-01-23T01:12:14.401989568Z" level=info msg="StopContainer for \"c7c352a47117677f05d8055e5f58ddf0f261129d61bff705be3824c3832f89a0\" returns successfully" Jan 23 01:12:14.403361 containerd[1975]: time="2026-01-23T01:12:14.403264708Z" level=info msg="StopPodSandbox for \"454c880c8a7e96999daf8ecf6ec67013c69c258d5161f1b736413abd77fcb238\"" Jan 23 01:12:14.403662 containerd[1975]: time="2026-01-23T01:12:14.403625839Z" level=info msg="Container to stop \"5c37ee394192ee1eaad4f2b131d411dd12dd67f7cf894a62be1bcd6d44f21a04\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 23 01:12:14.403760 containerd[1975]: time="2026-01-23T01:12:14.403744396Z" level=info msg="Container to stop \"c7c352a47117677f05d8055e5f58ddf0f261129d61bff705be3824c3832f89a0\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 23 01:12:14.403836 containerd[1975]: time="2026-01-23T01:12:14.403822662Z" level=info msg="Container to stop \"c196ce2cd34eaf38409411037a2c6b2dd3ff8a17bff114f4538e9a6f6c621be0\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 23 01:12:14.403907 containerd[1975]: time="2026-01-23T01:12:14.403893377Z" level=info msg="Container to stop \"8cd490cfbec935518e7521d25f7d464f4198688f566128b344cc1d7144c8e7fd\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 23 01:12:14.403980 
containerd[1975]: time="2026-01-23T01:12:14.403966333Z" level=info msg="Container to stop \"d7200152035e2c88234fa33a0937b3e79d96b82b4791ccd0be528b2d8e1bb130\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 23 01:12:14.417605 systemd[1]: cri-containerd-454c880c8a7e96999daf8ecf6ec67013c69c258d5161f1b736413abd77fcb238.scope: Deactivated successfully. Jan 23 01:12:14.424085 containerd[1975]: time="2026-01-23T01:12:14.423972184Z" level=info msg="received sandbox exit event container_id:\"454c880c8a7e96999daf8ecf6ec67013c69c258d5161f1b736413abd77fcb238\" id:\"454c880c8a7e96999daf8ecf6ec67013c69c258d5161f1b736413abd77fcb238\" exit_status:137 exited_at:{seconds:1769130734 nanos:423531540}" monitor_name=podsandbox Jan 23 01:12:14.448025 containerd[1975]: time="2026-01-23T01:12:14.444712618Z" level=info msg="received sandbox container exit event sandbox_id:\"17b7cbdf5c8cecccdb5cd33336bf220cf4f721272468cce858cb04f621177da3\" exit_status:137 exited_at:{seconds:1769130734 nanos:305498157}" monitor_name=criService Jan 23 01:12:14.451041 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-17b7cbdf5c8cecccdb5cd33336bf220cf4f721272468cce858cb04f621177da3-shm.mount: Deactivated successfully. Jan 23 01:12:14.463808 containerd[1975]: time="2026-01-23T01:12:14.463752081Z" level=info msg="TearDown network for sandbox \"17b7cbdf5c8cecccdb5cd33336bf220cf4f721272468cce858cb04f621177da3\" successfully" Jan 23 01:12:14.464137 containerd[1975]: time="2026-01-23T01:12:14.463983281Z" level=info msg="StopPodSandbox for \"17b7cbdf5c8cecccdb5cd33336bf220cf4f721272468cce858cb04f621177da3\" returns successfully" Jan 23 01:12:14.483018 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-454c880c8a7e96999daf8ecf6ec67013c69c258d5161f1b736413abd77fcb238-rootfs.mount: Deactivated successfully. 
Jan 23 01:12:14.496577 containerd[1975]: time="2026-01-23T01:12:14.496418279Z" level=info msg="shim disconnected" id=454c880c8a7e96999daf8ecf6ec67013c69c258d5161f1b736413abd77fcb238 namespace=k8s.io Jan 23 01:12:14.496577 containerd[1975]: time="2026-01-23T01:12:14.496549203Z" level=warning msg="cleaning up after shim disconnected" id=454c880c8a7e96999daf8ecf6ec67013c69c258d5161f1b736413abd77fcb238 namespace=k8s.io Jan 23 01:12:14.496577 containerd[1975]: time="2026-01-23T01:12:14.496557733Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 23 01:12:14.510819 containerd[1975]: time="2026-01-23T01:12:14.510734061Z" level=info msg="received sandbox container exit event sandbox_id:\"454c880c8a7e96999daf8ecf6ec67013c69c258d5161f1b736413abd77fcb238\" exit_status:137 exited_at:{seconds:1769130734 nanos:423531540}" monitor_name=criService Jan 23 01:12:14.511120 containerd[1975]: time="2026-01-23T01:12:14.511094959Z" level=info msg="TearDown network for sandbox \"454c880c8a7e96999daf8ecf6ec67013c69c258d5161f1b736413abd77fcb238\" successfully" Jan 23 01:12:14.511229 containerd[1975]: time="2026-01-23T01:12:14.511212260Z" level=info msg="StopPodSandbox for \"454c880c8a7e96999daf8ecf6ec67013c69c258d5161f1b736413abd77fcb238\" returns successfully" Jan 23 01:12:14.633789 kubelet[3314]: I0123 01:12:14.633732 3314 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qccsr\" (UniqueName: \"kubernetes.io/projected/55d93638-13b7-406a-8971-2c9c72e13447-kube-api-access-qccsr\") pod \"55d93638-13b7-406a-8971-2c9c72e13447\" (UID: \"55d93638-13b7-406a-8971-2c9c72e13447\") " Jan 23 01:12:14.633789 kubelet[3314]: I0123 01:12:14.633793 3314 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/55d93638-13b7-406a-8971-2c9c72e13447-etc-cni-netd\") pod \"55d93638-13b7-406a-8971-2c9c72e13447\" (UID: \"55d93638-13b7-406a-8971-2c9c72e13447\") " Jan 23 
01:12:14.633789 kubelet[3314]: I0123 01:12:14.633824 3314 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fkjj6\" (UniqueName: \"kubernetes.io/projected/346b77ab-3aca-42c1-b651-a5ea5e392a72-kube-api-access-fkjj6\") pod \"346b77ab-3aca-42c1-b651-a5ea5e392a72\" (UID: \"346b77ab-3aca-42c1-b651-a5ea5e392a72\") " Jan 23 01:12:14.633789 kubelet[3314]: I0123 01:12:14.633850 3314 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/55d93638-13b7-406a-8971-2c9c72e13447-bpf-maps\") pod \"55d93638-13b7-406a-8971-2c9c72e13447\" (UID: \"55d93638-13b7-406a-8971-2c9c72e13447\") " Jan 23 01:12:14.633789 kubelet[3314]: I0123 01:12:14.633869 3314 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/55d93638-13b7-406a-8971-2c9c72e13447-xtables-lock\") pod \"55d93638-13b7-406a-8971-2c9c72e13447\" (UID: \"55d93638-13b7-406a-8971-2c9c72e13447\") " Jan 23 01:12:14.633789 kubelet[3314]: I0123 01:12:14.633890 3314 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/55d93638-13b7-406a-8971-2c9c72e13447-cni-path\") pod \"55d93638-13b7-406a-8971-2c9c72e13447\" (UID: \"55d93638-13b7-406a-8971-2c9c72e13447\") " Jan 23 01:12:14.634628 kubelet[3314]: I0123 01:12:14.633949 3314 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/55d93638-13b7-406a-8971-2c9c72e13447-clustermesh-secrets\") pod \"55d93638-13b7-406a-8971-2c9c72e13447\" (UID: \"55d93638-13b7-406a-8971-2c9c72e13447\") " Jan 23 01:12:14.634628 kubelet[3314]: I0123 01:12:14.633970 3314 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/55d93638-13b7-406a-8971-2c9c72e13447-cilium-cgroup\") pod 
\"55d93638-13b7-406a-8971-2c9c72e13447\" (UID: \"55d93638-13b7-406a-8971-2c9c72e13447\") " Jan 23 01:12:14.634628 kubelet[3314]: I0123 01:12:14.633996 3314 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/55d93638-13b7-406a-8971-2c9c72e13447-hubble-tls\") pod \"55d93638-13b7-406a-8971-2c9c72e13447\" (UID: \"55d93638-13b7-406a-8971-2c9c72e13447\") " Jan 23 01:12:14.634628 kubelet[3314]: I0123 01:12:14.634024 3314 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/55d93638-13b7-406a-8971-2c9c72e13447-cilium-config-path\") pod \"55d93638-13b7-406a-8971-2c9c72e13447\" (UID: \"55d93638-13b7-406a-8971-2c9c72e13447\") " Jan 23 01:12:14.634628 kubelet[3314]: I0123 01:12:14.634052 3314 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/346b77ab-3aca-42c1-b651-a5ea5e392a72-cilium-config-path\") pod \"346b77ab-3aca-42c1-b651-a5ea5e392a72\" (UID: \"346b77ab-3aca-42c1-b651-a5ea5e392a72\") " Jan 23 01:12:14.634628 kubelet[3314]: I0123 01:12:14.634080 3314 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/55d93638-13b7-406a-8971-2c9c72e13447-host-proc-sys-kernel\") pod \"55d93638-13b7-406a-8971-2c9c72e13447\" (UID: \"55d93638-13b7-406a-8971-2c9c72e13447\") " Jan 23 01:12:14.635271 kubelet[3314]: I0123 01:12:14.634100 3314 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/55d93638-13b7-406a-8971-2c9c72e13447-host-proc-sys-net\") pod \"55d93638-13b7-406a-8971-2c9c72e13447\" (UID: \"55d93638-13b7-406a-8971-2c9c72e13447\") " Jan 23 01:12:14.635271 kubelet[3314]: I0123 01:12:14.634122 3314 reconciler_common.go:162] "operationExecutor.UnmountVolume 
started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/55d93638-13b7-406a-8971-2c9c72e13447-hostproc\") pod \"55d93638-13b7-406a-8971-2c9c72e13447\" (UID: \"55d93638-13b7-406a-8971-2c9c72e13447\") " Jan 23 01:12:14.635271 kubelet[3314]: I0123 01:12:14.634145 3314 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/55d93638-13b7-406a-8971-2c9c72e13447-cilium-run\") pod \"55d93638-13b7-406a-8971-2c9c72e13447\" (UID: \"55d93638-13b7-406a-8971-2c9c72e13447\") " Jan 23 01:12:14.635271 kubelet[3314]: I0123 01:12:14.634169 3314 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/55d93638-13b7-406a-8971-2c9c72e13447-lib-modules\") pod \"55d93638-13b7-406a-8971-2c9c72e13447\" (UID: \"55d93638-13b7-406a-8971-2c9c72e13447\") " Jan 23 01:12:14.635271 kubelet[3314]: I0123 01:12:14.634251 3314 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/55d93638-13b7-406a-8971-2c9c72e13447-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "55d93638-13b7-406a-8971-2c9c72e13447" (UID: "55d93638-13b7-406a-8971-2c9c72e13447"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 01:12:14.635271 kubelet[3314]: I0123 01:12:14.634295 3314 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/55d93638-13b7-406a-8971-2c9c72e13447-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "55d93638-13b7-406a-8971-2c9c72e13447" (UID: "55d93638-13b7-406a-8971-2c9c72e13447"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 01:12:14.636077 kubelet[3314]: I0123 01:12:14.636027 3314 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/55d93638-13b7-406a-8971-2c9c72e13447-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "55d93638-13b7-406a-8971-2c9c72e13447" (UID: "55d93638-13b7-406a-8971-2c9c72e13447"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 01:12:14.636077 kubelet[3314]: I0123 01:12:14.636074 3314 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/55d93638-13b7-406a-8971-2c9c72e13447-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "55d93638-13b7-406a-8971-2c9c72e13447" (UID: "55d93638-13b7-406a-8971-2c9c72e13447"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 01:12:14.636214 kubelet[3314]: I0123 01:12:14.636095 3314 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/55d93638-13b7-406a-8971-2c9c72e13447-cni-path" (OuterVolumeSpecName: "cni-path") pod "55d93638-13b7-406a-8971-2c9c72e13447" (UID: "55d93638-13b7-406a-8971-2c9c72e13447"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 01:12:14.644487 kubelet[3314]: I0123 01:12:14.641442 3314 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/55d93638-13b7-406a-8971-2c9c72e13447-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "55d93638-13b7-406a-8971-2c9c72e13447" (UID: "55d93638-13b7-406a-8971-2c9c72e13447"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 01:12:14.644487 kubelet[3314]: I0123 01:12:14.641500 3314 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/55d93638-13b7-406a-8971-2c9c72e13447-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "55d93638-13b7-406a-8971-2c9c72e13447" (UID: "55d93638-13b7-406a-8971-2c9c72e13447"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 01:12:14.644487 kubelet[3314]: I0123 01:12:14.641522 3314 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/55d93638-13b7-406a-8971-2c9c72e13447-hostproc" (OuterVolumeSpecName: "hostproc") pod "55d93638-13b7-406a-8971-2c9c72e13447" (UID: "55d93638-13b7-406a-8971-2c9c72e13447"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 01:12:14.644487 kubelet[3314]: I0123 01:12:14.641541 3314 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/55d93638-13b7-406a-8971-2c9c72e13447-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "55d93638-13b7-406a-8971-2c9c72e13447" (UID: "55d93638-13b7-406a-8971-2c9c72e13447"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 01:12:14.644487 kubelet[3314]: I0123 01:12:14.644012 3314 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/346b77ab-3aca-42c1-b651-a5ea5e392a72-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "346b77ab-3aca-42c1-b651-a5ea5e392a72" (UID: "346b77ab-3aca-42c1-b651-a5ea5e392a72"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 01:12:14.646151 kubelet[3314]: I0123 01:12:14.646114 3314 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/55d93638-13b7-406a-8971-2c9c72e13447-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "55d93638-13b7-406a-8971-2c9c72e13447" (UID: "55d93638-13b7-406a-8971-2c9c72e13447"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 01:12:14.648211 kubelet[3314]: I0123 01:12:14.648175 3314 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/55d93638-13b7-406a-8971-2c9c72e13447-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "55d93638-13b7-406a-8971-2c9c72e13447" (UID: "55d93638-13b7-406a-8971-2c9c72e13447"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 01:12:14.652237 kubelet[3314]: I0123 01:12:14.652200 3314 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55d93638-13b7-406a-8971-2c9c72e13447-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "55d93638-13b7-406a-8971-2c9c72e13447" (UID: "55d93638-13b7-406a-8971-2c9c72e13447"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 23 01:12:14.652589 kubelet[3314]: I0123 01:12:14.652473 3314 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/346b77ab-3aca-42c1-b651-a5ea5e392a72-kube-api-access-fkjj6" (OuterVolumeSpecName: "kube-api-access-fkjj6") pod "346b77ab-3aca-42c1-b651-a5ea5e392a72" (UID: "346b77ab-3aca-42c1-b651-a5ea5e392a72"). InnerVolumeSpecName "kube-api-access-fkjj6". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 01:12:14.652686 kubelet[3314]: I0123 01:12:14.652546 3314 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55d93638-13b7-406a-8971-2c9c72e13447-kube-api-access-qccsr" (OuterVolumeSpecName: "kube-api-access-qccsr") pod "55d93638-13b7-406a-8971-2c9c72e13447" (UID: "55d93638-13b7-406a-8971-2c9c72e13447"). InnerVolumeSpecName "kube-api-access-qccsr". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 01:12:14.652786 kubelet[3314]: I0123 01:12:14.652771 3314 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55d93638-13b7-406a-8971-2c9c72e13447-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "55d93638-13b7-406a-8971-2c9c72e13447" (UID: "55d93638-13b7-406a-8971-2c9c72e13447"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 01:12:14.735265 kubelet[3314]: I0123 01:12:14.735161 3314 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qccsr\" (UniqueName: \"kubernetes.io/projected/55d93638-13b7-406a-8971-2c9c72e13447-kube-api-access-qccsr\") on node \"ip-172-31-21-166\" DevicePath \"\"" Jan 23 01:12:14.735451 kubelet[3314]: I0123 01:12:14.735435 3314 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/55d93638-13b7-406a-8971-2c9c72e13447-etc-cni-netd\") on node \"ip-172-31-21-166\" DevicePath \"\"" Jan 23 01:12:14.735520 kubelet[3314]: I0123 01:12:14.735511 3314 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/55d93638-13b7-406a-8971-2c9c72e13447-cni-path\") on node \"ip-172-31-21-166\" DevicePath \"\"" Jan 23 01:12:14.735570 kubelet[3314]: I0123 01:12:14.735560 3314 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-fkjj6\" (UniqueName: 
\"kubernetes.io/projected/346b77ab-3aca-42c1-b651-a5ea5e392a72-kube-api-access-fkjj6\") on node \"ip-172-31-21-166\" DevicePath \"\"" Jan 23 01:12:14.735612 kubelet[3314]: I0123 01:12:14.735606 3314 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/55d93638-13b7-406a-8971-2c9c72e13447-bpf-maps\") on node \"ip-172-31-21-166\" DevicePath \"\"" Jan 23 01:12:14.735655 kubelet[3314]: I0123 01:12:14.735649 3314 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/55d93638-13b7-406a-8971-2c9c72e13447-xtables-lock\") on node \"ip-172-31-21-166\" DevicePath \"\"" Jan 23 01:12:14.735708 kubelet[3314]: I0123 01:12:14.735700 3314 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/55d93638-13b7-406a-8971-2c9c72e13447-clustermesh-secrets\") on node \"ip-172-31-21-166\" DevicePath \"\"" Jan 23 01:12:14.735756 kubelet[3314]: I0123 01:12:14.735750 3314 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/55d93638-13b7-406a-8971-2c9c72e13447-cilium-cgroup\") on node \"ip-172-31-21-166\" DevicePath \"\"" Jan 23 01:12:14.735801 kubelet[3314]: I0123 01:12:14.735795 3314 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/55d93638-13b7-406a-8971-2c9c72e13447-hubble-tls\") on node \"ip-172-31-21-166\" DevicePath \"\"" Jan 23 01:12:14.735845 kubelet[3314]: I0123 01:12:14.735839 3314 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/55d93638-13b7-406a-8971-2c9c72e13447-cilium-config-path\") on node \"ip-172-31-21-166\" DevicePath \"\"" Jan 23 01:12:14.735891 kubelet[3314]: I0123 01:12:14.735884 3314 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/346b77ab-3aca-42c1-b651-a5ea5e392a72-cilium-config-path\") on node \"ip-172-31-21-166\" DevicePath \"\"" Jan 23 01:12:14.735930 kubelet[3314]: I0123 01:12:14.735924 3314 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/55d93638-13b7-406a-8971-2c9c72e13447-host-proc-sys-kernel\") on node \"ip-172-31-21-166\" DevicePath \"\"" Jan 23 01:12:14.735972 kubelet[3314]: I0123 01:12:14.735966 3314 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/55d93638-13b7-406a-8971-2c9c72e13447-host-proc-sys-net\") on node \"ip-172-31-21-166\" DevicePath \"\"" Jan 23 01:12:14.736018 kubelet[3314]: I0123 01:12:14.736011 3314 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/55d93638-13b7-406a-8971-2c9c72e13447-hostproc\") on node \"ip-172-31-21-166\" DevicePath \"\"" Jan 23 01:12:14.736062 kubelet[3314]: I0123 01:12:14.736056 3314 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/55d93638-13b7-406a-8971-2c9c72e13447-cilium-run\") on node \"ip-172-31-21-166\" DevicePath \"\"" Jan 23 01:12:14.736106 kubelet[3314]: I0123 01:12:14.736099 3314 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/55d93638-13b7-406a-8971-2c9c72e13447-lib-modules\") on node \"ip-172-31-21-166\" DevicePath \"\"" Jan 23 01:12:14.844013 kubelet[3314]: I0123 01:12:14.843982 3314 scope.go:117] "RemoveContainer" containerID="25a0e63be5de55af17308ed55d13ff9123c2e0de36e6b7d94fdec982d66e6894" Jan 23 01:12:14.848716 containerd[1975]: time="2026-01-23T01:12:14.848677456Z" level=info msg="RemoveContainer for \"25a0e63be5de55af17308ed55d13ff9123c2e0de36e6b7d94fdec982d66e6894\"" Jan 23 01:12:14.853628 systemd[1]: Removed slice kubepods-besteffort-pod346b77ab_3aca_42c1_b651_a5ea5e392a72.slice - 
libcontainer container kubepods-besteffort-pod346b77ab_3aca_42c1_b651_a5ea5e392a72.slice. Jan 23 01:12:14.870213 systemd[1]: Removed slice kubepods-burstable-pod55d93638_13b7_406a_8971_2c9c72e13447.slice - libcontainer container kubepods-burstable-pod55d93638_13b7_406a_8971_2c9c72e13447.slice. Jan 23 01:12:14.887398 kubelet[3314]: I0123 01:12:14.864476 3314 scope.go:117] "RemoveContainer" containerID="25a0e63be5de55af17308ed55d13ff9123c2e0de36e6b7d94fdec982d66e6894" Jan 23 01:12:14.887398 kubelet[3314]: E0123 01:12:14.865259 3314 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"25a0e63be5de55af17308ed55d13ff9123c2e0de36e6b7d94fdec982d66e6894\": not found" containerID="25a0e63be5de55af17308ed55d13ff9123c2e0de36e6b7d94fdec982d66e6894" Jan 23 01:12:14.887398 kubelet[3314]: I0123 01:12:14.865290 3314 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"25a0e63be5de55af17308ed55d13ff9123c2e0de36e6b7d94fdec982d66e6894"} err="failed to get container status \"25a0e63be5de55af17308ed55d13ff9123c2e0de36e6b7d94fdec982d66e6894\": rpc error: code = NotFound desc = an error occurred when try to find container \"25a0e63be5de55af17308ed55d13ff9123c2e0de36e6b7d94fdec982d66e6894\": not found" Jan 23 01:12:14.887398 kubelet[3314]: I0123 01:12:14.865380 3314 scope.go:117] "RemoveContainer" containerID="c7c352a47117677f05d8055e5f58ddf0f261129d61bff705be3824c3832f89a0" Jan 23 01:12:14.887398 kubelet[3314]: I0123 01:12:14.878590 3314 scope.go:117] "RemoveContainer" containerID="d7200152035e2c88234fa33a0937b3e79d96b82b4791ccd0be528b2d8e1bb130" Jan 23 01:12:14.887705 containerd[1975]: time="2026-01-23T01:12:14.861323280Z" level=info msg="RemoveContainer for \"25a0e63be5de55af17308ed55d13ff9123c2e0de36e6b7d94fdec982d66e6894\" returns successfully" Jan 23 01:12:14.887705 containerd[1975]: time="2026-01-23T01:12:14.865066022Z" level=error 
msg="ContainerStatus for \"25a0e63be5de55af17308ed55d13ff9123c2e0de36e6b7d94fdec982d66e6894\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"25a0e63be5de55af17308ed55d13ff9123c2e0de36e6b7d94fdec982d66e6894\": not found" Jan 23 01:12:14.887705 containerd[1975]: time="2026-01-23T01:12:14.866804408Z" level=info msg="RemoveContainer for \"c7c352a47117677f05d8055e5f58ddf0f261129d61bff705be3824c3832f89a0\"" Jan 23 01:12:14.887705 containerd[1975]: time="2026-01-23T01:12:14.878091518Z" level=info msg="RemoveContainer for \"c7c352a47117677f05d8055e5f58ddf0f261129d61bff705be3824c3832f89a0\" returns successfully" Jan 23 01:12:14.887705 containerd[1975]: time="2026-01-23T01:12:14.881306915Z" level=info msg="RemoveContainer for \"d7200152035e2c88234fa33a0937b3e79d96b82b4791ccd0be528b2d8e1bb130\"" Jan 23 01:12:14.870351 systemd[1]: kubepods-burstable-pod55d93638_13b7_406a_8971_2c9c72e13447.slice: Consumed 8.731s CPU time, 223.5M memory peak, 105.2M read from disk, 13.3M written to disk. 
Jan 23 01:12:14.889184 containerd[1975]: time="2026-01-23T01:12:14.889144387Z" level=info msg="RemoveContainer for \"d7200152035e2c88234fa33a0937b3e79d96b82b4791ccd0be528b2d8e1bb130\" returns successfully" Jan 23 01:12:14.889511 kubelet[3314]: I0123 01:12:14.889481 3314 scope.go:117] "RemoveContainer" containerID="8cd490cfbec935518e7521d25f7d464f4198688f566128b344cc1d7144c8e7fd" Jan 23 01:12:14.894288 containerd[1975]: time="2026-01-23T01:12:14.894251055Z" level=info msg="RemoveContainer for \"8cd490cfbec935518e7521d25f7d464f4198688f566128b344cc1d7144c8e7fd\"" Jan 23 01:12:14.918994 containerd[1975]: time="2026-01-23T01:12:14.918956395Z" level=info msg="RemoveContainer for \"8cd490cfbec935518e7521d25f7d464f4198688f566128b344cc1d7144c8e7fd\" returns successfully" Jan 23 01:12:14.919550 kubelet[3314]: I0123 01:12:14.919492 3314 scope.go:117] "RemoveContainer" containerID="5c37ee394192ee1eaad4f2b131d411dd12dd67f7cf894a62be1bcd6d44f21a04" Jan 23 01:12:14.921840 containerd[1975]: time="2026-01-23T01:12:14.921806230Z" level=info msg="RemoveContainer for \"5c37ee394192ee1eaad4f2b131d411dd12dd67f7cf894a62be1bcd6d44f21a04\"" Jan 23 01:12:14.927737 containerd[1975]: time="2026-01-23T01:12:14.927669561Z" level=info msg="RemoveContainer for \"5c37ee394192ee1eaad4f2b131d411dd12dd67f7cf894a62be1bcd6d44f21a04\" returns successfully" Jan 23 01:12:14.928065 kubelet[3314]: I0123 01:12:14.928015 3314 scope.go:117] "RemoveContainer" containerID="c196ce2cd34eaf38409411037a2c6b2dd3ff8a17bff114f4538e9a6f6c621be0" Jan 23 01:12:14.930224 containerd[1975]: time="2026-01-23T01:12:14.930165768Z" level=info msg="RemoveContainer for \"c196ce2cd34eaf38409411037a2c6b2dd3ff8a17bff114f4538e9a6f6c621be0\"" Jan 23 01:12:14.936226 containerd[1975]: time="2026-01-23T01:12:14.936183855Z" level=info msg="RemoveContainer for \"c196ce2cd34eaf38409411037a2c6b2dd3ff8a17bff114f4538e9a6f6c621be0\" returns successfully" Jan 23 01:12:14.936466 kubelet[3314]: I0123 01:12:14.936438 3314 scope.go:117] 
"RemoveContainer" containerID="c7c352a47117677f05d8055e5f58ddf0f261129d61bff705be3824c3832f89a0" Jan 23 01:12:14.936782 containerd[1975]: time="2026-01-23T01:12:14.936726133Z" level=error msg="ContainerStatus for \"c7c352a47117677f05d8055e5f58ddf0f261129d61bff705be3824c3832f89a0\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c7c352a47117677f05d8055e5f58ddf0f261129d61bff705be3824c3832f89a0\": not found" Jan 23 01:12:14.937033 kubelet[3314]: E0123 01:12:14.936876 3314 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c7c352a47117677f05d8055e5f58ddf0f261129d61bff705be3824c3832f89a0\": not found" containerID="c7c352a47117677f05d8055e5f58ddf0f261129d61bff705be3824c3832f89a0" Jan 23 01:12:14.937033 kubelet[3314]: I0123 01:12:14.936996 3314 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c7c352a47117677f05d8055e5f58ddf0f261129d61bff705be3824c3832f89a0"} err="failed to get container status \"c7c352a47117677f05d8055e5f58ddf0f261129d61bff705be3824c3832f89a0\": rpc error: code = NotFound desc = an error occurred when try to find container \"c7c352a47117677f05d8055e5f58ddf0f261129d61bff705be3824c3832f89a0\": not found" Jan 23 01:12:14.937033 kubelet[3314]: I0123 01:12:14.937026 3314 scope.go:117] "RemoveContainer" containerID="d7200152035e2c88234fa33a0937b3e79d96b82b4791ccd0be528b2d8e1bb130" Jan 23 01:12:14.937281 containerd[1975]: time="2026-01-23T01:12:14.937250975Z" level=error msg="ContainerStatus for \"d7200152035e2c88234fa33a0937b3e79d96b82b4791ccd0be528b2d8e1bb130\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d7200152035e2c88234fa33a0937b3e79d96b82b4791ccd0be528b2d8e1bb130\": not found" Jan 23 01:12:14.937464 kubelet[3314]: E0123 01:12:14.937433 3314 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: 
code = NotFound desc = an error occurred when try to find container \"d7200152035e2c88234fa33a0937b3e79d96b82b4791ccd0be528b2d8e1bb130\": not found" containerID="d7200152035e2c88234fa33a0937b3e79d96b82b4791ccd0be528b2d8e1bb130" Jan 23 01:12:14.937464 kubelet[3314]: I0123 01:12:14.937454 3314 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d7200152035e2c88234fa33a0937b3e79d96b82b4791ccd0be528b2d8e1bb130"} err="failed to get container status \"d7200152035e2c88234fa33a0937b3e79d96b82b4791ccd0be528b2d8e1bb130\": rpc error: code = NotFound desc = an error occurred when try to find container \"d7200152035e2c88234fa33a0937b3e79d96b82b4791ccd0be528b2d8e1bb130\": not found" Jan 23 01:12:14.937555 kubelet[3314]: I0123 01:12:14.937475 3314 scope.go:117] "RemoveContainer" containerID="8cd490cfbec935518e7521d25f7d464f4198688f566128b344cc1d7144c8e7fd" Jan 23 01:12:14.937701 containerd[1975]: time="2026-01-23T01:12:14.937594340Z" level=error msg="ContainerStatus for \"8cd490cfbec935518e7521d25f7d464f4198688f566128b344cc1d7144c8e7fd\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8cd490cfbec935518e7521d25f7d464f4198688f566128b344cc1d7144c8e7fd\": not found" Jan 23 01:12:14.937830 kubelet[3314]: E0123 01:12:14.937705 3314 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8cd490cfbec935518e7521d25f7d464f4198688f566128b344cc1d7144c8e7fd\": not found" containerID="8cd490cfbec935518e7521d25f7d464f4198688f566128b344cc1d7144c8e7fd" Jan 23 01:12:14.937830 kubelet[3314]: I0123 01:12:14.937721 3314 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8cd490cfbec935518e7521d25f7d464f4198688f566128b344cc1d7144c8e7fd"} err="failed to get container status \"8cd490cfbec935518e7521d25f7d464f4198688f566128b344cc1d7144c8e7fd\": rpc error: code = NotFound desc 
= an error occurred when try to find container \"8cd490cfbec935518e7521d25f7d464f4198688f566128b344cc1d7144c8e7fd\": not found" Jan 23 01:12:14.937830 kubelet[3314]: I0123 01:12:14.937734 3314 scope.go:117] "RemoveContainer" containerID="5c37ee394192ee1eaad4f2b131d411dd12dd67f7cf894a62be1bcd6d44f21a04" Jan 23 01:12:14.937927 containerd[1975]: time="2026-01-23T01:12:14.937851669Z" level=error msg="ContainerStatus for \"5c37ee394192ee1eaad4f2b131d411dd12dd67f7cf894a62be1bcd6d44f21a04\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5c37ee394192ee1eaad4f2b131d411dd12dd67f7cf894a62be1bcd6d44f21a04\": not found" Jan 23 01:12:14.938021 kubelet[3314]: E0123 01:12:14.937999 3314 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5c37ee394192ee1eaad4f2b131d411dd12dd67f7cf894a62be1bcd6d44f21a04\": not found" containerID="5c37ee394192ee1eaad4f2b131d411dd12dd67f7cf894a62be1bcd6d44f21a04" Jan 23 01:12:14.938072 kubelet[3314]: I0123 01:12:14.938019 3314 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5c37ee394192ee1eaad4f2b131d411dd12dd67f7cf894a62be1bcd6d44f21a04"} err="failed to get container status \"5c37ee394192ee1eaad4f2b131d411dd12dd67f7cf894a62be1bcd6d44f21a04\": rpc error: code = NotFound desc = an error occurred when try to find container \"5c37ee394192ee1eaad4f2b131d411dd12dd67f7cf894a62be1bcd6d44f21a04\": not found" Jan 23 01:12:14.938072 kubelet[3314]: I0123 01:12:14.938031 3314 scope.go:117] "RemoveContainer" containerID="c196ce2cd34eaf38409411037a2c6b2dd3ff8a17bff114f4538e9a6f6c621be0" Jan 23 01:12:14.938160 containerd[1975]: time="2026-01-23T01:12:14.938134827Z" level=error msg="ContainerStatus for \"c196ce2cd34eaf38409411037a2c6b2dd3ff8a17bff114f4538e9a6f6c621be0\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"c196ce2cd34eaf38409411037a2c6b2dd3ff8a17bff114f4538e9a6f6c621be0\": not found" Jan 23 01:12:14.938284 kubelet[3314]: E0123 01:12:14.938263 3314 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c196ce2cd34eaf38409411037a2c6b2dd3ff8a17bff114f4538e9a6f6c621be0\": not found" containerID="c196ce2cd34eaf38409411037a2c6b2dd3ff8a17bff114f4538e9a6f6c621be0" Jan 23 01:12:14.938352 kubelet[3314]: I0123 01:12:14.938284 3314 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c196ce2cd34eaf38409411037a2c6b2dd3ff8a17bff114f4538e9a6f6c621be0"} err="failed to get container status \"c196ce2cd34eaf38409411037a2c6b2dd3ff8a17bff114f4538e9a6f6c621be0\": rpc error: code = NotFound desc = an error occurred when try to find container \"c196ce2cd34eaf38409411037a2c6b2dd3ff8a17bff114f4538e9a6f6c621be0\": not found" Jan 23 01:12:15.248588 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-454c880c8a7e96999daf8ecf6ec67013c69c258d5161f1b736413abd77fcb238-shm.mount: Deactivated successfully. Jan 23 01:12:15.248702 systemd[1]: var-lib-kubelet-pods-346b77ab\x2d3aca\x2d42c1\x2db651\x2da5ea5e392a72-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dfkjj6.mount: Deactivated successfully. Jan 23 01:12:15.248772 systemd[1]: var-lib-kubelet-pods-55d93638\x2d13b7\x2d406a\x2d8971\x2d2c9c72e13447-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jan 23 01:12:15.248838 systemd[1]: var-lib-kubelet-pods-55d93638\x2d13b7\x2d406a\x2d8971\x2d2c9c72e13447-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dqccsr.mount: Deactivated successfully. Jan 23 01:12:15.248990 systemd[1]: var-lib-kubelet-pods-55d93638\x2d13b7\x2d406a\x2d8971\x2d2c9c72e13447-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Jan 23 01:12:15.410578 kubelet[3314]: I0123 01:12:15.410506 3314 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="346b77ab-3aca-42c1-b651-a5ea5e392a72" path="/var/lib/kubelet/pods/346b77ab-3aca-42c1-b651-a5ea5e392a72/volumes" Jan 23 01:12:15.410932 kubelet[3314]: I0123 01:12:15.410905 3314 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="55d93638-13b7-406a-8971-2c9c72e13447" path="/var/lib/kubelet/pods/55d93638-13b7-406a-8971-2c9c72e13447/volumes" Jan 23 01:12:16.153391 sshd[5031]: Connection closed by 68.220.241.50 port 45190 Jan 23 01:12:16.154416 sshd-session[5028]: pam_unix(sshd:session): session closed for user core Jan 23 01:12:16.159550 systemd[1]: sshd@21-172.31.21.166:22-68.220.241.50:45190.service: Deactivated successfully. Jan 23 01:12:16.159683 systemd-logind[1956]: Session 22 logged out. Waiting for processes to exit. Jan 23 01:12:16.162454 systemd[1]: session-22.scope: Deactivated successfully. Jan 23 01:12:16.164366 systemd-logind[1956]: Removed session 22. Jan 23 01:12:16.248430 systemd[1]: Started sshd@22-172.31.21.166:22-68.220.241.50:46666.service - OpenSSH per-connection server daemon (68.220.241.50:46666). Jan 23 01:12:16.599190 ntpd[2169]: Deleting 10 lxc_health, [fe80::7483:8aff:fe91:1758%8]:123, stats: received=0, sent=0, dropped=0, active_time=73 secs Jan 23 01:12:16.599576 ntpd[2169]: 23 Jan 01:12:16 ntpd[2169]: Deleting 10 lxc_health, [fe80::7483:8aff:fe91:1758%8]:123, stats: received=0, sent=0, dropped=0, active_time=73 secs Jan 23 01:12:16.755853 sshd[5179]: Accepted publickey for core from 68.220.241.50 port 46666 ssh2: RSA SHA256:TjRK9JlVbt43cjCH9yNUnU6Xa0awhPYO1lN4GVbk/WA Jan 23 01:12:16.757513 sshd-session[5179]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:12:16.763773 systemd-logind[1956]: New session 23 of user core. Jan 23 01:12:16.770539 systemd[1]: Started session-23.scope - Session 23 of User core. 
Jan 23 01:12:17.576671 kubelet[3314]: I0123 01:12:17.575723 3314 memory_manager.go:355] "RemoveStaleState removing state" podUID="346b77ab-3aca-42c1-b651-a5ea5e392a72" containerName="cilium-operator" Jan 23 01:12:17.578345 kubelet[3314]: I0123 01:12:17.578289 3314 memory_manager.go:355] "RemoveStaleState removing state" podUID="55d93638-13b7-406a-8971-2c9c72e13447" containerName="cilium-agent" Jan 23 01:12:17.594521 systemd[1]: Created slice kubepods-burstable-pod2514a8c6_3cfb_4cdc_80d8_893002069078.slice - libcontainer container kubepods-burstable-pod2514a8c6_3cfb_4cdc_80d8_893002069078.slice. Jan 23 01:12:17.617103 sshd[5182]: Connection closed by 68.220.241.50 port 46666 Jan 23 01:12:17.617535 sshd-session[5179]: pam_unix(sshd:session): session closed for user core Jan 23 01:12:17.625691 systemd[1]: sshd@22-172.31.21.166:22-68.220.241.50:46666.service: Deactivated successfully. Jan 23 01:12:17.626721 systemd-logind[1956]: Session 23 logged out. Waiting for processes to exit. Jan 23 01:12:17.631964 systemd[1]: session-23.scope: Deactivated successfully. Jan 23 01:12:17.641965 systemd-logind[1956]: Removed session 23. Jan 23 01:12:17.652363 kubelet[3314]: E0123 01:12:17.652300 3314 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 23 01:12:17.718590 systemd[1]: Started sshd@23-172.31.21.166:22-68.220.241.50:46672.service - OpenSSH per-connection server daemon (68.220.241.50:46672). 
Jan 23 01:12:17.752749 kubelet[3314]: I0123 01:12:17.752708 3314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2514a8c6-3cfb-4cdc-80d8-893002069078-cni-path\") pod \"cilium-jljz7\" (UID: \"2514a8c6-3cfb-4cdc-80d8-893002069078\") " pod="kube-system/cilium-jljz7" Jan 23 01:12:17.752749 kubelet[3314]: I0123 01:12:17.752756 3314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2514a8c6-3cfb-4cdc-80d8-893002069078-cilium-config-path\") pod \"cilium-jljz7\" (UID: \"2514a8c6-3cfb-4cdc-80d8-893002069078\") " pod="kube-system/cilium-jljz7" Jan 23 01:12:17.753174 kubelet[3314]: I0123 01:12:17.752783 3314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2514a8c6-3cfb-4cdc-80d8-893002069078-lib-modules\") pod \"cilium-jljz7\" (UID: \"2514a8c6-3cfb-4cdc-80d8-893002069078\") " pod="kube-system/cilium-jljz7" Jan 23 01:12:17.753174 kubelet[3314]: I0123 01:12:17.752806 3314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2514a8c6-3cfb-4cdc-80d8-893002069078-cilium-run\") pod \"cilium-jljz7\" (UID: \"2514a8c6-3cfb-4cdc-80d8-893002069078\") " pod="kube-system/cilium-jljz7" Jan 23 01:12:17.753174 kubelet[3314]: I0123 01:12:17.752826 3314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2514a8c6-3cfb-4cdc-80d8-893002069078-hostproc\") pod \"cilium-jljz7\" (UID: \"2514a8c6-3cfb-4cdc-80d8-893002069078\") " pod="kube-system/cilium-jljz7" Jan 23 01:12:17.753174 kubelet[3314]: I0123 01:12:17.752845 3314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2514a8c6-3cfb-4cdc-80d8-893002069078-xtables-lock\") pod \"cilium-jljz7\" (UID: \"2514a8c6-3cfb-4cdc-80d8-893002069078\") " pod="kube-system/cilium-jljz7" Jan 23 01:12:17.753174 kubelet[3314]: I0123 01:12:17.752874 3314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2514a8c6-3cfb-4cdc-80d8-893002069078-clustermesh-secrets\") pod \"cilium-jljz7\" (UID: \"2514a8c6-3cfb-4cdc-80d8-893002069078\") " pod="kube-system/cilium-jljz7" Jan 23 01:12:17.753174 kubelet[3314]: I0123 01:12:17.752904 3314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2514a8c6-3cfb-4cdc-80d8-893002069078-etc-cni-netd\") pod \"cilium-jljz7\" (UID: \"2514a8c6-3cfb-4cdc-80d8-893002069078\") " pod="kube-system/cilium-jljz7" Jan 23 01:12:17.753916 kubelet[3314]: I0123 01:12:17.752929 3314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/2514a8c6-3cfb-4cdc-80d8-893002069078-cilium-ipsec-secrets\") pod \"cilium-jljz7\" (UID: \"2514a8c6-3cfb-4cdc-80d8-893002069078\") " pod="kube-system/cilium-jljz7" Jan 23 01:12:17.753916 kubelet[3314]: I0123 01:12:17.752952 3314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2514a8c6-3cfb-4cdc-80d8-893002069078-host-proc-sys-kernel\") pod \"cilium-jljz7\" (UID: \"2514a8c6-3cfb-4cdc-80d8-893002069078\") " pod="kube-system/cilium-jljz7" Jan 23 01:12:17.753916 kubelet[3314]: I0123 01:12:17.752974 3314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: 
\"kubernetes.io/projected/2514a8c6-3cfb-4cdc-80d8-893002069078-hubble-tls\") pod \"cilium-jljz7\" (UID: \"2514a8c6-3cfb-4cdc-80d8-893002069078\") " pod="kube-system/cilium-jljz7" Jan 23 01:12:17.753916 kubelet[3314]: I0123 01:12:17.752999 3314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2514a8c6-3cfb-4cdc-80d8-893002069078-bpf-maps\") pod \"cilium-jljz7\" (UID: \"2514a8c6-3cfb-4cdc-80d8-893002069078\") " pod="kube-system/cilium-jljz7" Jan 23 01:12:17.753916 kubelet[3314]: I0123 01:12:17.753020 3314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2514a8c6-3cfb-4cdc-80d8-893002069078-cilium-cgroup\") pod \"cilium-jljz7\" (UID: \"2514a8c6-3cfb-4cdc-80d8-893002069078\") " pod="kube-system/cilium-jljz7" Jan 23 01:12:17.753916 kubelet[3314]: I0123 01:12:17.753040 3314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2514a8c6-3cfb-4cdc-80d8-893002069078-host-proc-sys-net\") pod \"cilium-jljz7\" (UID: \"2514a8c6-3cfb-4cdc-80d8-893002069078\") " pod="kube-system/cilium-jljz7" Jan 23 01:12:17.754291 kubelet[3314]: I0123 01:12:17.753074 3314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9rwtm\" (UniqueName: \"kubernetes.io/projected/2514a8c6-3cfb-4cdc-80d8-893002069078-kube-api-access-9rwtm\") pod \"cilium-jljz7\" (UID: \"2514a8c6-3cfb-4cdc-80d8-893002069078\") " pod="kube-system/cilium-jljz7" Jan 23 01:12:17.901964 containerd[1975]: time="2026-01-23T01:12:17.901439051Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-jljz7,Uid:2514a8c6-3cfb-4cdc-80d8-893002069078,Namespace:kube-system,Attempt:0,}" Jan 23 01:12:17.936804 containerd[1975]: time="2026-01-23T01:12:17.936740666Z" 
level=info msg="connecting to shim 92c128b5bbc4cb0f18b2adb709ee52da11297135f81e2c45d61b8baa286a4948" address="unix:///run/containerd/s/a55fc67225ab6a11a5265fe0f198f43a23c827c3773252bf7a8ff34c3246e307" namespace=k8s.io protocol=ttrpc version=3 Jan 23 01:12:17.965645 systemd[1]: Started cri-containerd-92c128b5bbc4cb0f18b2adb709ee52da11297135f81e2c45d61b8baa286a4948.scope - libcontainer container 92c128b5bbc4cb0f18b2adb709ee52da11297135f81e2c45d61b8baa286a4948. Jan 23 01:12:17.998838 containerd[1975]: time="2026-01-23T01:12:17.998791073Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-jljz7,Uid:2514a8c6-3cfb-4cdc-80d8-893002069078,Namespace:kube-system,Attempt:0,} returns sandbox id \"92c128b5bbc4cb0f18b2adb709ee52da11297135f81e2c45d61b8baa286a4948\"" Jan 23 01:12:18.004016 containerd[1975]: time="2026-01-23T01:12:18.003972706Z" level=info msg="CreateContainer within sandbox \"92c128b5bbc4cb0f18b2adb709ee52da11297135f81e2c45d61b8baa286a4948\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 23 01:12:18.018809 containerd[1975]: time="2026-01-23T01:12:18.018749708Z" level=info msg="Container 9c5138d12b28dc0770bfd29df5d205f74401aaf92529b0c423b61ee1aa3b4a8b: CDI devices from CRI Config.CDIDevices: []" Jan 23 01:12:18.033316 containerd[1975]: time="2026-01-23T01:12:18.033258745Z" level=info msg="CreateContainer within sandbox \"92c128b5bbc4cb0f18b2adb709ee52da11297135f81e2c45d61b8baa286a4948\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"9c5138d12b28dc0770bfd29df5d205f74401aaf92529b0c423b61ee1aa3b4a8b\"" Jan 23 01:12:18.033900 containerd[1975]: time="2026-01-23T01:12:18.033839539Z" level=info msg="StartContainer for \"9c5138d12b28dc0770bfd29df5d205f74401aaf92529b0c423b61ee1aa3b4a8b\"" Jan 23 01:12:18.035349 containerd[1975]: time="2026-01-23T01:12:18.035273072Z" level=info msg="connecting to shim 9c5138d12b28dc0770bfd29df5d205f74401aaf92529b0c423b61ee1aa3b4a8b" 
address="unix:///run/containerd/s/a55fc67225ab6a11a5265fe0f198f43a23c827c3773252bf7a8ff34c3246e307" protocol=ttrpc version=3 Jan 23 01:12:18.054549 systemd[1]: Started cri-containerd-9c5138d12b28dc0770bfd29df5d205f74401aaf92529b0c423b61ee1aa3b4a8b.scope - libcontainer container 9c5138d12b28dc0770bfd29df5d205f74401aaf92529b0c423b61ee1aa3b4a8b. Jan 23 01:12:18.096247 containerd[1975]: time="2026-01-23T01:12:18.096179420Z" level=info msg="StartContainer for \"9c5138d12b28dc0770bfd29df5d205f74401aaf92529b0c423b61ee1aa3b4a8b\" returns successfully" Jan 23 01:12:18.262412 sshd[5192]: Accepted publickey for core from 68.220.241.50 port 46672 ssh2: RSA SHA256:TjRK9JlVbt43cjCH9yNUnU6Xa0awhPYO1lN4GVbk/WA Jan 23 01:12:18.263010 sshd-session[5192]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:12:18.269159 systemd-logind[1956]: New session 24 of user core. Jan 23 01:12:18.280569 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 23 01:12:18.329325 systemd[1]: cri-containerd-9c5138d12b28dc0770bfd29df5d205f74401aaf92529b0c423b61ee1aa3b4a8b.scope: Deactivated successfully. Jan 23 01:12:18.329765 systemd[1]: cri-containerd-9c5138d12b28dc0770bfd29df5d205f74401aaf92529b0c423b61ee1aa3b4a8b.scope: Consumed 25ms CPU time, 9.6M memory peak, 3.1M read from disk. Jan 23 01:12:18.330735 containerd[1975]: time="2026-01-23T01:12:18.330640712Z" level=info msg="received container exit event container_id:\"9c5138d12b28dc0770bfd29df5d205f74401aaf92529b0c423b61ee1aa3b4a8b\" id:\"9c5138d12b28dc0770bfd29df5d205f74401aaf92529b0c423b61ee1aa3b4a8b\" pid:5258 exited_at:{seconds:1769130738 nanos:330382304}" Jan 23 01:12:18.640759 sshd[5275]: Connection closed by 68.220.241.50 port 46672 Jan 23 01:12:18.642522 sshd-session[5192]: pam_unix(sshd:session): session closed for user core Jan 23 01:12:18.646460 systemd-logind[1956]: Session 24 logged out. Waiting for processes to exit. 
Jan 23 01:12:18.647048 systemd[1]: sshd@23-172.31.21.166:22-68.220.241.50:46672.service: Deactivated successfully. Jan 23 01:12:18.649641 systemd[1]: session-24.scope: Deactivated successfully. Jan 23 01:12:18.651486 systemd-logind[1956]: Removed session 24. Jan 23 01:12:18.726073 systemd[1]: Started sshd@24-172.31.21.166:22-68.220.241.50:46686.service - OpenSSH per-connection server daemon (68.220.241.50:46686). Jan 23 01:12:18.899186 containerd[1975]: time="2026-01-23T01:12:18.898881169Z" level=info msg="CreateContainer within sandbox \"92c128b5bbc4cb0f18b2adb709ee52da11297135f81e2c45d61b8baa286a4948\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 23 01:12:18.921363 containerd[1975]: time="2026-01-23T01:12:18.918207178Z" level=info msg="Container 722d97a3638c6397d45c29eff51576b54eb2905702e544a3018ec9238944ee81: CDI devices from CRI Config.CDIDevices: []" Jan 23 01:12:18.935440 containerd[1975]: time="2026-01-23T01:12:18.935079030Z" level=info msg="CreateContainer within sandbox \"92c128b5bbc4cb0f18b2adb709ee52da11297135f81e2c45d61b8baa286a4948\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"722d97a3638c6397d45c29eff51576b54eb2905702e544a3018ec9238944ee81\"" Jan 23 01:12:18.937494 containerd[1975]: time="2026-01-23T01:12:18.937458353Z" level=info msg="StartContainer for \"722d97a3638c6397d45c29eff51576b54eb2905702e544a3018ec9238944ee81\"" Jan 23 01:12:18.939830 containerd[1975]: time="2026-01-23T01:12:18.938316285Z" level=info msg="connecting to shim 722d97a3638c6397d45c29eff51576b54eb2905702e544a3018ec9238944ee81" address="unix:///run/containerd/s/a55fc67225ab6a11a5265fe0f198f43a23c827c3773252bf7a8ff34c3246e307" protocol=ttrpc version=3 Jan 23 01:12:18.973617 systemd[1]: Started cri-containerd-722d97a3638c6397d45c29eff51576b54eb2905702e544a3018ec9238944ee81.scope - libcontainer container 722d97a3638c6397d45c29eff51576b54eb2905702e544a3018ec9238944ee81. 
Jan 23 01:12:19.007967 containerd[1975]: time="2026-01-23T01:12:19.007929671Z" level=info msg="StartContainer for \"722d97a3638c6397d45c29eff51576b54eb2905702e544a3018ec9238944ee81\" returns successfully" Jan 23 01:12:19.171202 systemd[1]: cri-containerd-722d97a3638c6397d45c29eff51576b54eb2905702e544a3018ec9238944ee81.scope: Deactivated successfully. Jan 23 01:12:19.171506 systemd[1]: cri-containerd-722d97a3638c6397d45c29eff51576b54eb2905702e544a3018ec9238944ee81.scope: Consumed 21ms CPU time, 7.5M memory peak, 2.1M read from disk. Jan 23 01:12:19.174408 containerd[1975]: time="2026-01-23T01:12:19.174224220Z" level=info msg="received container exit event container_id:\"722d97a3638c6397d45c29eff51576b54eb2905702e544a3018ec9238944ee81\" id:\"722d97a3638c6397d45c29eff51576b54eb2905702e544a3018ec9238944ee81\" pid:5316 exited_at:{seconds:1769130739 nanos:172670140}" Jan 23 01:12:19.199970 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-722d97a3638c6397d45c29eff51576b54eb2905702e544a3018ec9238944ee81-rootfs.mount: Deactivated successfully. Jan 23 01:12:19.249532 sshd[5301]: Accepted publickey for core from 68.220.241.50 port 46686 ssh2: RSA SHA256:TjRK9JlVbt43cjCH9yNUnU6Xa0awhPYO1lN4GVbk/WA Jan 23 01:12:19.250868 sshd-session[5301]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:12:19.256435 systemd-logind[1956]: New session 25 of user core. Jan 23 01:12:19.261592 systemd[1]: Started session-25.scope - Session 25 of User core. 
Jan 23 01:12:19.898298 containerd[1975]: time="2026-01-23T01:12:19.898225965Z" level=info msg="CreateContainer within sandbox \"92c128b5bbc4cb0f18b2adb709ee52da11297135f81e2c45d61b8baa286a4948\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 23 01:12:19.906443 kubelet[3314]: I0123 01:12:19.906392 3314 setters.go:602] "Node became not ready" node="ip-172-31-21-166" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T01:12:19Z","lastTransitionTime":"2026-01-23T01:12:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jan 23 01:12:19.922204 containerd[1975]: time="2026-01-23T01:12:19.921871224Z" level=info msg="Container e2267b0aef1dbad504476e72ebd0883e2c6bea68f8d4392b1d9a891b11b7dcab: CDI devices from CRI Config.CDIDevices: []" Jan 23 01:12:19.942126 containerd[1975]: time="2026-01-23T01:12:19.942060500Z" level=info msg="CreateContainer within sandbox \"92c128b5bbc4cb0f18b2adb709ee52da11297135f81e2c45d61b8baa286a4948\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"e2267b0aef1dbad504476e72ebd0883e2c6bea68f8d4392b1d9a891b11b7dcab\"" Jan 23 01:12:19.942830 containerd[1975]: time="2026-01-23T01:12:19.942766216Z" level=info msg="StartContainer for \"e2267b0aef1dbad504476e72ebd0883e2c6bea68f8d4392b1d9a891b11b7dcab\"" Jan 23 01:12:19.944355 containerd[1975]: time="2026-01-23T01:12:19.944278713Z" level=info msg="connecting to shim e2267b0aef1dbad504476e72ebd0883e2c6bea68f8d4392b1d9a891b11b7dcab" address="unix:///run/containerd/s/a55fc67225ab6a11a5265fe0f198f43a23c827c3773252bf7a8ff34c3246e307" protocol=ttrpc version=3 Jan 23 01:12:19.972689 systemd[1]: Started cri-containerd-e2267b0aef1dbad504476e72ebd0883e2c6bea68f8d4392b1d9a891b11b7dcab.scope - libcontainer container e2267b0aef1dbad504476e72ebd0883e2c6bea68f8d4392b1d9a891b11b7dcab. 
Jan 23 01:12:20.069586 containerd[1975]: time="2026-01-23T01:12:20.069459278Z" level=info msg="StartContainer for \"e2267b0aef1dbad504476e72ebd0883e2c6bea68f8d4392b1d9a891b11b7dcab\" returns successfully" Jan 23 01:12:20.139286 systemd[1]: cri-containerd-e2267b0aef1dbad504476e72ebd0883e2c6bea68f8d4392b1d9a891b11b7dcab.scope: Deactivated successfully. Jan 23 01:12:20.141702 containerd[1975]: time="2026-01-23T01:12:20.141661293Z" level=info msg="received container exit event container_id:\"e2267b0aef1dbad504476e72ebd0883e2c6bea68f8d4392b1d9a891b11b7dcab\" id:\"e2267b0aef1dbad504476e72ebd0883e2c6bea68f8d4392b1d9a891b11b7dcab\" pid:5367 exited_at:{seconds:1769130740 nanos:141262995}" Jan 23 01:12:20.172754 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e2267b0aef1dbad504476e72ebd0883e2c6bea68f8d4392b1d9a891b11b7dcab-rootfs.mount: Deactivated successfully. Jan 23 01:12:20.906674 containerd[1975]: time="2026-01-23T01:12:20.906364619Z" level=info msg="CreateContainer within sandbox \"92c128b5bbc4cb0f18b2adb709ee52da11297135f81e2c45d61b8baa286a4948\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 23 01:12:20.926555 containerd[1975]: time="2026-01-23T01:12:20.924549082Z" level=info msg="Container e1919d6fe18f96661485c476d42b4724da03f7522ad4a02ea6b37248633d7729: CDI devices from CRI Config.CDIDevices: []" Jan 23 01:12:20.936361 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1110310270.mount: Deactivated successfully. 
Jan 23 01:12:20.946812 containerd[1975]: time="2026-01-23T01:12:20.946765354Z" level=info msg="CreateContainer within sandbox \"92c128b5bbc4cb0f18b2adb709ee52da11297135f81e2c45d61b8baa286a4948\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"e1919d6fe18f96661485c476d42b4724da03f7522ad4a02ea6b37248633d7729\"" Jan 23 01:12:20.948272 containerd[1975]: time="2026-01-23T01:12:20.947911425Z" level=info msg="StartContainer for \"e1919d6fe18f96661485c476d42b4724da03f7522ad4a02ea6b37248633d7729\"" Jan 23 01:12:20.949754 containerd[1975]: time="2026-01-23T01:12:20.949718315Z" level=info msg="connecting to shim e1919d6fe18f96661485c476d42b4724da03f7522ad4a02ea6b37248633d7729" address="unix:///run/containerd/s/a55fc67225ab6a11a5265fe0f198f43a23c827c3773252bf7a8ff34c3246e307" protocol=ttrpc version=3 Jan 23 01:12:20.979595 systemd[1]: Started cri-containerd-e1919d6fe18f96661485c476d42b4724da03f7522ad4a02ea6b37248633d7729.scope - libcontainer container e1919d6fe18f96661485c476d42b4724da03f7522ad4a02ea6b37248633d7729. Jan 23 01:12:21.021228 systemd[1]: cri-containerd-e1919d6fe18f96661485c476d42b4724da03f7522ad4a02ea6b37248633d7729.scope: Deactivated successfully. Jan 23 01:12:21.024146 containerd[1975]: time="2026-01-23T01:12:21.024029976Z" level=info msg="received container exit event container_id:\"e1919d6fe18f96661485c476d42b4724da03f7522ad4a02ea6b37248633d7729\" id:\"e1919d6fe18f96661485c476d42b4724da03f7522ad4a02ea6b37248633d7729\" pid:5408 exited_at:{seconds:1769130741 nanos:23033997}" Jan 23 01:12:21.027055 containerd[1975]: time="2026-01-23T01:12:21.027023823Z" level=info msg="StartContainer for \"e1919d6fe18f96661485c476d42b4724da03f7522ad4a02ea6b37248633d7729\" returns successfully" Jan 23 01:12:21.058588 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e1919d6fe18f96661485c476d42b4724da03f7522ad4a02ea6b37248633d7729-rootfs.mount: Deactivated successfully. 
Jan 23 01:12:21.912754 containerd[1975]: time="2026-01-23T01:12:21.912695301Z" level=info msg="CreateContainer within sandbox \"92c128b5bbc4cb0f18b2adb709ee52da11297135f81e2c45d61b8baa286a4948\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 23 01:12:21.941286 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1649844410.mount: Deactivated successfully. Jan 23 01:12:21.946235 containerd[1975]: time="2026-01-23T01:12:21.945558381Z" level=info msg="Container c913782c53bfd931357b20b3a07b9ac755095c06fddc6ef1abe5e62f031cdf6c: CDI devices from CRI Config.CDIDevices: []" Jan 23 01:12:21.947310 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4165080217.mount: Deactivated successfully. Jan 23 01:12:21.959948 containerd[1975]: time="2026-01-23T01:12:21.959908570Z" level=info msg="CreateContainer within sandbox \"92c128b5bbc4cb0f18b2adb709ee52da11297135f81e2c45d61b8baa286a4948\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"c913782c53bfd931357b20b3a07b9ac755095c06fddc6ef1abe5e62f031cdf6c\"" Jan 23 01:12:21.960613 containerd[1975]: time="2026-01-23T01:12:21.960580058Z" level=info msg="StartContainer for \"c913782c53bfd931357b20b3a07b9ac755095c06fddc6ef1abe5e62f031cdf6c\"" Jan 23 01:12:21.961573 containerd[1975]: time="2026-01-23T01:12:21.961535894Z" level=info msg="connecting to shim c913782c53bfd931357b20b3a07b9ac755095c06fddc6ef1abe5e62f031cdf6c" address="unix:///run/containerd/s/a55fc67225ab6a11a5265fe0f198f43a23c827c3773252bf7a8ff34c3246e307" protocol=ttrpc version=3 Jan 23 01:12:21.985565 systemd[1]: Started cri-containerd-c913782c53bfd931357b20b3a07b9ac755095c06fddc6ef1abe5e62f031cdf6c.scope - libcontainer container c913782c53bfd931357b20b3a07b9ac755095c06fddc6ef1abe5e62f031cdf6c. 
Jan 23 01:12:22.036088 containerd[1975]: time="2026-01-23T01:12:22.036051155Z" level=info msg="StartContainer for \"c913782c53bfd931357b20b3a07b9ac755095c06fddc6ef1abe5e62f031cdf6c\" returns successfully" Jan 23 01:12:22.693442 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni-avx)) Jan 23 01:12:25.787925 (udev-worker)[5995]: Network interface NamePolicy= disabled on kernel command line. Jan 23 01:12:25.789934 (udev-worker)[5996]: Network interface NamePolicy= disabled on kernel command line. Jan 23 01:12:25.790907 systemd-networkd[1858]: lxc_health: Link UP Jan 23 01:12:25.798673 systemd-networkd[1858]: lxc_health: Gained carrier Jan 23 01:12:25.948291 kubelet[3314]: I0123 01:12:25.948221 3314 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-jljz7" podStartSLOduration=8.948200925 podStartE2EDuration="8.948200925s" podCreationTimestamp="2026-01-23 01:12:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 01:12:22.936457892 +0000 UTC m=+105.774018151" watchObservedRunningTime="2026-01-23 01:12:25.948200925 +0000 UTC m=+108.785761178" Jan 23 01:12:26.881616 systemd-networkd[1858]: lxc_health: Gained IPv6LL Jan 23 01:12:29.599376 ntpd[2169]: Listen normally on 13 lxc_health [fe80::f41e:50ff:fe00:ebee%14]:123 Jan 23 01:12:29.599880 ntpd[2169]: 23 Jan 01:12:29 ntpd[2169]: Listen normally on 13 lxc_health [fe80::f41e:50ff:fe00:ebee%14]:123 Jan 23 01:12:33.302247 sshd[5347]: Connection closed by 68.220.241.50 port 46686 Jan 23 01:12:33.304651 sshd-session[5301]: pam_unix(sshd:session): session closed for user core Jan 23 01:12:33.310656 systemd[1]: sshd@24-172.31.21.166:22-68.220.241.50:46686.service: Deactivated successfully. Jan 23 01:12:33.313256 systemd[1]: session-25.scope: Deactivated successfully. Jan 23 01:12:33.315084 systemd-logind[1956]: Session 25 logged out. Waiting for processes to exit. 
Jan 23 01:12:33.316781 systemd-logind[1956]: Removed session 25. Jan 23 01:12:37.452563 containerd[1975]: time="2026-01-23T01:12:37.452502097Z" level=info msg="StopPodSandbox for \"17b7cbdf5c8cecccdb5cd33336bf220cf4f721272468cce858cb04f621177da3\"" Jan 23 01:12:37.453470 containerd[1975]: time="2026-01-23T01:12:37.453442889Z" level=info msg="TearDown network for sandbox \"17b7cbdf5c8cecccdb5cd33336bf220cf4f721272468cce858cb04f621177da3\" successfully" Jan 23 01:12:37.453470 containerd[1975]: time="2026-01-23T01:12:37.453465353Z" level=info msg="StopPodSandbox for \"17b7cbdf5c8cecccdb5cd33336bf220cf4f721272468cce858cb04f621177da3\" returns successfully" Jan 23 01:12:37.453865 containerd[1975]: time="2026-01-23T01:12:37.453829040Z" level=info msg="RemovePodSandbox for \"17b7cbdf5c8cecccdb5cd33336bf220cf4f721272468cce858cb04f621177da3\"" Jan 23 01:12:37.463635 containerd[1975]: time="2026-01-23T01:12:37.463569777Z" level=info msg="Forcibly stopping sandbox \"17b7cbdf5c8cecccdb5cd33336bf220cf4f721272468cce858cb04f621177da3\"" Jan 23 01:12:37.465397 containerd[1975]: time="2026-01-23T01:12:37.464870994Z" level=info msg="TearDown network for sandbox \"17b7cbdf5c8cecccdb5cd33336bf220cf4f721272468cce858cb04f621177da3\" successfully" Jan 23 01:12:37.469789 containerd[1975]: time="2026-01-23T01:12:37.469723176Z" level=info msg="Ensure that sandbox 17b7cbdf5c8cecccdb5cd33336bf220cf4f721272468cce858cb04f621177da3 in task-service has been cleanup successfully" Jan 23 01:12:37.476406 containerd[1975]: time="2026-01-23T01:12:37.476355315Z" level=info msg="RemovePodSandbox \"17b7cbdf5c8cecccdb5cd33336bf220cf4f721272468cce858cb04f621177da3\" returns successfully" Jan 23 01:12:37.477085 containerd[1975]: time="2026-01-23T01:12:37.476886167Z" level=info msg="StopPodSandbox for \"454c880c8a7e96999daf8ecf6ec67013c69c258d5161f1b736413abd77fcb238\"" Jan 23 01:12:37.477226 containerd[1975]: time="2026-01-23T01:12:37.477202577Z" level=info msg="TearDown network for sandbox 
\"454c880c8a7e96999daf8ecf6ec67013c69c258d5161f1b736413abd77fcb238\" successfully" Jan 23 01:12:37.477226 containerd[1975]: time="2026-01-23T01:12:37.477221767Z" level=info msg="StopPodSandbox for \"454c880c8a7e96999daf8ecf6ec67013c69c258d5161f1b736413abd77fcb238\" returns successfully" Jan 23 01:12:37.477564 containerd[1975]: time="2026-01-23T01:12:37.477532248Z" level=info msg="RemovePodSandbox for \"454c880c8a7e96999daf8ecf6ec67013c69c258d5161f1b736413abd77fcb238\"" Jan 23 01:12:37.477564 containerd[1975]: time="2026-01-23T01:12:37.477554304Z" level=info msg="Forcibly stopping sandbox \"454c880c8a7e96999daf8ecf6ec67013c69c258d5161f1b736413abd77fcb238\"" Jan 23 01:12:37.477674 containerd[1975]: time="2026-01-23T01:12:37.477645082Z" level=info msg="TearDown network for sandbox \"454c880c8a7e96999daf8ecf6ec67013c69c258d5161f1b736413abd77fcb238\" successfully" Jan 23 01:12:37.478720 containerd[1975]: time="2026-01-23T01:12:37.478688496Z" level=info msg="Ensure that sandbox 454c880c8a7e96999daf8ecf6ec67013c69c258d5161f1b736413abd77fcb238 in task-service has been cleanup successfully" Jan 23 01:12:37.485525 containerd[1975]: time="2026-01-23T01:12:37.485457108Z" level=info msg="RemovePodSandbox \"454c880c8a7e96999daf8ecf6ec67013c69c258d5161f1b736413abd77fcb238\" returns successfully" Jan 23 01:12:48.471802 systemd[1]: cri-containerd-0ffbd25f231cbf8d1fcab01b545d6f893bcf01eea0bffd51f8473957b59cea66.scope: Deactivated successfully. Jan 23 01:12:48.472605 systemd[1]: cri-containerd-0ffbd25f231cbf8d1fcab01b545d6f893bcf01eea0bffd51f8473957b59cea66.scope: Consumed 3.939s CPU time, 73.9M memory peak, 23.9M read from disk. 
Jan 23 01:12:48.475653 containerd[1975]: time="2026-01-23T01:12:48.475616773Z" level=info msg="received container exit event container_id:\"0ffbd25f231cbf8d1fcab01b545d6f893bcf01eea0bffd51f8473957b59cea66\" id:\"0ffbd25f231cbf8d1fcab01b545d6f893bcf01eea0bffd51f8473957b59cea66\" pid:3135 exit_status:1 exited_at:{seconds:1769130768 nanos:474867342}" Jan 23 01:12:48.505397 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0ffbd25f231cbf8d1fcab01b545d6f893bcf01eea0bffd51f8473957b59cea66-rootfs.mount: Deactivated successfully. Jan 23 01:12:48.989195 kubelet[3314]: I0123 01:12:48.989151 3314 scope.go:117] "RemoveContainer" containerID="0ffbd25f231cbf8d1fcab01b545d6f893bcf01eea0bffd51f8473957b59cea66" Jan 23 01:12:48.991040 containerd[1975]: time="2026-01-23T01:12:48.991006754Z" level=info msg="CreateContainer within sandbox \"663b6edecd229286ab05de910b489d29d57d05b1b45ef3ec96c55180c2fa3308\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Jan 23 01:12:49.010360 containerd[1975]: time="2026-01-23T01:12:49.008412130Z" level=info msg="Container 9c6fb2693809be13ca7de3e50bc14b3ed05736b428a63f6c9e5d1b1859b97a3a: CDI devices from CRI Config.CDIDevices: []" Jan 23 01:12:49.023289 containerd[1975]: time="2026-01-23T01:12:49.023238462Z" level=info msg="CreateContainer within sandbox \"663b6edecd229286ab05de910b489d29d57d05b1b45ef3ec96c55180c2fa3308\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"9c6fb2693809be13ca7de3e50bc14b3ed05736b428a63f6c9e5d1b1859b97a3a\"" Jan 23 01:12:49.025288 containerd[1975]: time="2026-01-23T01:12:49.023941938Z" level=info msg="StartContainer for \"9c6fb2693809be13ca7de3e50bc14b3ed05736b428a63f6c9e5d1b1859b97a3a\"" Jan 23 01:12:49.025467 containerd[1975]: time="2026-01-23T01:12:49.025435672Z" level=info msg="connecting to shim 9c6fb2693809be13ca7de3e50bc14b3ed05736b428a63f6c9e5d1b1859b97a3a" 
address="unix:///run/containerd/s/b7e89d67b9ed61245d8de0877a510b5a1d8b1eafdcfa018e6cd12b1502d3b32b" protocol=ttrpc version=3 Jan 23 01:12:49.058622 systemd[1]: Started cri-containerd-9c6fb2693809be13ca7de3e50bc14b3ed05736b428a63f6c9e5d1b1859b97a3a.scope - libcontainer container 9c6fb2693809be13ca7de3e50bc14b3ed05736b428a63f6c9e5d1b1859b97a3a. Jan 23 01:12:49.131525 containerd[1975]: time="2026-01-23T01:12:49.131433715Z" level=info msg="StartContainer for \"9c6fb2693809be13ca7de3e50bc14b3ed05736b428a63f6c9e5d1b1859b97a3a\" returns successfully" Jan 23 01:12:49.623502 kubelet[3314]: E0123 01:12:49.623077 3314 controller.go:195] "Failed to update lease" err="Put \"https://172.31.21.166:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-21-166?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 23 01:12:53.547551 systemd[1]: cri-containerd-473fea9d6998dcb6a3737009df44eb0d5bb3e29d758191b23c85b062dcc2d578.scope: Deactivated successfully. Jan 23 01:12:53.548400 systemd[1]: cri-containerd-473fea9d6998dcb6a3737009df44eb0d5bb3e29d758191b23c85b062dcc2d578.scope: Consumed 2.572s CPU time, 32.2M memory peak, 15.9M read from disk. Jan 23 01:12:53.552702 containerd[1975]: time="2026-01-23T01:12:53.552667549Z" level=info msg="received container exit event container_id:\"473fea9d6998dcb6a3737009df44eb0d5bb3e29d758191b23c85b062dcc2d578\" id:\"473fea9d6998dcb6a3737009df44eb0d5bb3e29d758191b23c85b062dcc2d578\" pid:3162 exit_status:1 exited_at:{seconds:1769130773 nanos:552359608}" Jan 23 01:12:53.580676 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-473fea9d6998dcb6a3737009df44eb0d5bb3e29d758191b23c85b062dcc2d578-rootfs.mount: Deactivated successfully. 
Jan 23 01:12:54.005822 kubelet[3314]: I0123 01:12:54.005785 3314 scope.go:117] "RemoveContainer" containerID="473fea9d6998dcb6a3737009df44eb0d5bb3e29d758191b23c85b062dcc2d578" Jan 23 01:12:54.007878 containerd[1975]: time="2026-01-23T01:12:54.007835628Z" level=info msg="CreateContainer within sandbox \"d0c535f3775f166ed4d9a65811600721374e389953b21bcd17dcf1e46e585f37\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Jan 23 01:12:54.024264 containerd[1975]: time="2026-01-23T01:12:54.024092285Z" level=info msg="Container 54dedc8c22de714dec9b125580037d56bea290314ce912613ad9b93163aca5a1: CDI devices from CRI Config.CDIDevices: []" Jan 23 01:12:54.034020 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2311010890.mount: Deactivated successfully. Jan 23 01:12:54.037777 containerd[1975]: time="2026-01-23T01:12:54.037735909Z" level=info msg="CreateContainer within sandbox \"d0c535f3775f166ed4d9a65811600721374e389953b21bcd17dcf1e46e585f37\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"54dedc8c22de714dec9b125580037d56bea290314ce912613ad9b93163aca5a1\"" Jan 23 01:12:54.038337 containerd[1975]: time="2026-01-23T01:12:54.038306577Z" level=info msg="StartContainer for \"54dedc8c22de714dec9b125580037d56bea290314ce912613ad9b93163aca5a1\"" Jan 23 01:12:54.039620 containerd[1975]: time="2026-01-23T01:12:54.039561174Z" level=info msg="connecting to shim 54dedc8c22de714dec9b125580037d56bea290314ce912613ad9b93163aca5a1" address="unix:///run/containerd/s/37fff392f96666c264842641f1071a829acb748f5435c03e2499c30232b7625d" protocol=ttrpc version=3 Jan 23 01:12:54.062665 systemd[1]: Started cri-containerd-54dedc8c22de714dec9b125580037d56bea290314ce912613ad9b93163aca5a1.scope - libcontainer container 54dedc8c22de714dec9b125580037d56bea290314ce912613ad9b93163aca5a1. 
Jan 23 01:12:54.121746 containerd[1975]: time="2026-01-23T01:12:54.121701487Z" level=info msg="StartContainer for \"54dedc8c22de714dec9b125580037d56bea290314ce912613ad9b93163aca5a1\" returns successfully" Jan 23 01:12:59.623522 kubelet[3314]: E0123 01:12:59.623388 3314 controller.go:195] "Failed to update lease" err="Put \"https://172.31.21.166:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-21-166?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 23 01:13:09.625742 kubelet[3314]: E0123 01:13:09.625060 3314 controller.go:195] "Failed to update lease" err="Put \"https://172.31.21.166:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-21-166?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"